I0308 14:45:16.969611 6 e2e.go:224] Starting e2e run "6cdfd12c-614b-11ea-b38e-0242ac11000f" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1583678716 - Will randomize all specs
Will run 201 of 2164 specs

Mar 8 14:45:17.098: INFO: >>> kubeConfig: /root/.kube/config
Mar 8 14:45:17.100: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 8 14:45:17.109: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 8 14:45:17.127: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 8 14:45:17.127: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 8 14:45:17.127: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 8 14:45:17.133: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 8 14:45:17.133: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 8 14:45:17.133: INFO: e2e test version: v1.13.12
Mar 8 14:45:17.133: INFO: kube-apiserver version: v1.13.12
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:45:17.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
Mar 8 14:45:17.220: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-74p9g
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Mar 8 14:45:17.263: INFO: Found 0 stateful pods, waiting for 3
Mar 8 14:45:27.267: INFO: Found 2 stateful pods, waiting for 3
Mar 8 14:45:37.268: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 8 14:45:37.268: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 8 14:45:37.268: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Mar 8 14:45:37.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-74p9g ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Mar 8 14:45:37.483: INFO: stderr: "I0308 14:45:37.398580 38 log.go:172] (0xc00085e2c0) (0xc000736640) Create stream\nI0308 14:45:37.398618 38 log.go:172] (0xc00085e2c0) (0xc000736640) Stream added, broadcasting: 1\nI0308 14:45:37.403333 38 log.go:172] (0xc00085e2c0) Reply frame received for 1\nI0308 14:45:37.403377 38 log.go:172] (0xc00085e2c0) (0xc000684e60) Create stream\nI0308 14:45:37.403389 38 log.go:172] (0xc00085e2c0) (0xc000684e60) Stream added, broadcasting: 3\nI0308 14:45:37.404504 38 log.go:172] (0xc00085e2c0) Reply frame received for 3\nI0308 14:45:37.404546 38 log.go:172] (0xc00085e2c0) (0xc0000dc000) Create stream\nI0308 14:45:37.404562 38 log.go:172] (0xc00085e2c0) (0xc0000dc000) Stream added, broadcasting: 5\nI0308 14:45:37.405427 38 log.go:172] (0xc00085e2c0) Reply frame received for 5\nI0308 14:45:37.478747 38 log.go:172] (0xc00085e2c0) Data frame received for 5\nI0308 14:45:37.478782 38 log.go:172] (0xc0000dc000) (5) Data frame handling\nI0308 14:45:37.478801 38 log.go:172] (0xc00085e2c0) Data frame received for 3\nI0308 14:45:37.478807 38 log.go:172] (0xc000684e60) (3) Data frame handling\nI0308 14:45:37.478814 38 log.go:172] (0xc000684e60) (3) Data frame sent\nI0308 14:45:37.478823 38 log.go:172] (0xc00085e2c0) Data frame received for 3\nI0308 14:45:37.478827 38 log.go:172] (0xc000684e60) (3) Data frame handling\nI0308 14:45:37.480600 38 log.go:172] (0xc00085e2c0) Data frame received for 1\nI0308 14:45:37.480640 38 log.go:172] (0xc000736640) (1) Data frame handling\nI0308 14:45:37.480661 38 log.go:172] (0xc000736640) (1) Data frame sent\nI0308 14:45:37.480682 38 log.go:172] (0xc00085e2c0) (0xc000736640) Stream removed, broadcasting: 1\nI0308 14:45:37.480713 38 log.go:172] (0xc00085e2c0) Go away received\nI0308 14:45:37.480905 38 log.go:172] (0xc00085e2c0) (0xc000736640) Stream removed, broadcasting: 1\nI0308 14:45:37.480929 38 log.go:172] (0xc00085e2c0) (0xc000684e60) Stream removed, broadcasting: 3\nI0308 14:45:37.480938 38 log.go:172] (0xc00085e2c0) (0xc0000dc000) Stream removed, broadcasting: 5\n"
Mar 8 14:45:37.483: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 8 14:45:37.483: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Mar 8 14:45:47.514: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Mar 8 14:45:57.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-74p9g ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:45:57.778: INFO: stderr: "I0308 14:45:57.702416 61 log.go:172] (0xc00015c840) (0xc0005f1360) Create stream\nI0308 14:45:57.702464 61 log.go:172] (0xc00015c840) (0xc0005f1360) Stream added, broadcasting: 1\nI0308 14:45:57.704646 61 log.go:172] (0xc00015c840) Reply frame received for 1\nI0308 14:45:57.704697 61 log.go:172] (0xc00015c840) (0xc0007cc000) Create stream\nI0308 14:45:57.704710 61 log.go:172] (0xc00015c840) (0xc0007cc000) Stream added, broadcasting: 3\nI0308 14:45:57.705901 61 log.go:172] (0xc00015c840) Reply frame received for 3\nI0308 14:45:57.705948 61 log.go:172] (0xc00015c840) (0xc0005ee000) Create stream\nI0308 14:45:57.705963 61 log.go:172] (0xc00015c840) (0xc0005ee000) Stream added, broadcasting: 5\nI0308 14:45:57.707150 61 log.go:172] (0xc00015c840) Reply frame received for 5\nI0308 14:45:57.774103 61 log.go:172] (0xc00015c840) Data frame received for 5\nI0308 14:45:57.774166 61 log.go:172] (0xc00015c840) Data frame received for 3\nI0308 14:45:57.774198 61 log.go:172] (0xc0007cc000) (3) Data frame handling\nI0308 14:45:57.774213 61 log.go:172] (0xc0007cc000) (3) Data frame sent\nI0308 14:45:57.774225 61 log.go:172] (0xc00015c840) Data frame received for 3\nI0308 14:45:57.774233 61 log.go:172] (0xc0007cc000) (3) Data frame handling\nI0308 14:45:57.774261 61 log.go:172] (0xc0005ee000) (5) Data frame handling\nI0308 14:45:57.775985 61 log.go:172] (0xc00015c840) Data frame received for 1\nI0308 14:45:57.776008 61 log.go:172] (0xc0005f1360) (1) Data frame handling\nI0308 14:45:57.776030 61 log.go:172] (0xc0005f1360) (1) Data frame sent\nI0308 14:45:57.776042 61 log.go:172] (0xc00015c840) (0xc0005f1360) Stream removed, broadcasting: 1\nI0308 14:45:57.776109 61 log.go:172] (0xc00015c840) Go away received\nI0308 14:45:57.776178 61 log.go:172] (0xc00015c840) (0xc0005f1360) Stream removed, broadcasting: 1\nI0308 14:45:57.776198 61 log.go:172] (0xc00015c840) (0xc0007cc000) Stream removed, broadcasting: 3\nI0308 14:45:57.776207 61 log.go:172] (0xc00015c840) (0xc0005ee000) Stream removed, broadcasting: 5\n"
Mar 8 14:45:57.778: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 8 14:45:57.778: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Mar 8 14:46:27.800: INFO: Waiting for StatefulSet e2e-tests-statefulset-74p9g/ss2 to complete update
Mar 8 14:46:27.800: INFO: Waiting for Pod e2e-tests-statefulset-74p9g/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Mar 8 14:46:37.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-74p9g ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Mar 8 14:46:37.988: INFO: stderr: "I0308 14:46:37.922267 84 log.go:172] (0xc0001306e0) (0xc000693720) Create stream\nI0308 14:46:37.922308 84 log.go:172] (0xc0001306e0) (0xc000693720) Stream added, broadcasting: 1\nI0308 14:46:37.924625 84 log.go:172] (0xc0001306e0) Reply frame received for 1\nI0308 14:46:37.924670 84 log.go:172] (0xc0001306e0) (0xc00052a000) Create stream\nI0308 14:46:37.924684 84 log.go:172] (0xc0001306e0) (0xc00052a000) Stream added, broadcasting: 3\nI0308 14:46:37.926725 84 log.go:172] (0xc0001306e0) Reply frame received for 3\nI0308 14:46:37.926752 84 log.go:172] (0xc0001306e0) (0xc0000ee000) Create stream\nI0308 14:46:37.926760 84 log.go:172] (0xc0001306e0) (0xc0000ee000) Stream added, broadcasting: 5\nI0308 14:46:37.927348 84 log.go:172] (0xc0001306e0) Reply frame received for 5\nI0308 14:46:37.985633 84 log.go:172] (0xc0001306e0) Data frame received for 5\nI0308 14:46:37.985649 84 log.go:172] (0xc0000ee000) (5) Data frame handling\nI0308 14:46:37.985672 84 log.go:172] (0xc0001306e0) Data frame received for 3\nI0308 14:46:37.985695 84 log.go:172] (0xc00052a000) (3) Data frame handling\nI0308 14:46:37.985710 84 log.go:172] (0xc00052a000) (3) Data frame sent\nI0308 14:46:37.985725 84 log.go:172] (0xc0001306e0) Data frame received for 3\nI0308 14:46:37.985735 84 log.go:172] (0xc00052a000) (3) Data frame handling\nI0308 14:46:37.986811 84 log.go:172] (0xc0001306e0) Data frame received for 1\nI0308 14:46:37.986823 84 log.go:172] (0xc000693720) (1) Data frame handling\nI0308 14:46:37.986832 84 log.go:172] (0xc000693720) (1) Data frame sent\nI0308 14:46:37.986841 84 log.go:172] (0xc0001306e0) (0xc000693720) Stream removed, broadcasting: 1\nI0308 14:46:37.986850 84 log.go:172] (0xc0001306e0) Go away received\nI0308 14:46:37.986977 84 log.go:172] (0xc0001306e0) (0xc000693720) Stream removed, broadcasting: 1\nI0308 14:46:37.986986 84 log.go:172] (0xc0001306e0) (0xc00052a000) Stream removed, broadcasting: 3\nI0308 14:46:37.986991 84 log.go:172] (0xc0001306e0) (0xc0000ee000) Stream removed, broadcasting: 5\n"
Mar 8 14:46:37.989: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Mar 8 14:46:37.989: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Mar 8 14:46:38.033: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Mar 8 14:46:48.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-74p9g ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:46:48.298: INFO: stderr: "I0308 14:46:48.209306 106 log.go:172] (0xc00081c2c0) (0xc00070c640) Create stream\nI0308 14:46:48.209339 106 log.go:172] (0xc00081c2c0) (0xc00070c640) Stream added, broadcasting: 1\nI0308 14:46:48.210685 106 log.go:172] (0xc00081c2c0) Reply frame received for 1\nI0308 14:46:48.210712 106 log.go:172] (0xc00081c2c0) (0xc00070c6e0) Create stream\nI0308 14:46:48.210722 106 log.go:172] (0xc00081c2c0) (0xc00070c6e0) Stream added, broadcasting: 3\nI0308 14:46:48.211326 106 log.go:172] (0xc00081c2c0) Reply frame received for 3\nI0308 14:46:48.211355 106 log.go:172] (0xc00081c2c0) (0xc00070c780) Create stream\nI0308 14:46:48.211364 106 log.go:172] (0xc00081c2c0) (0xc00070c780) Stream added, broadcasting: 5\nI0308 14:46:48.212271 106 log.go:172] (0xc00081c2c0) Reply frame received for 5\nI0308 14:46:48.292597 106 log.go:172] (0xc00081c2c0) Data frame received for 3\nI0308 14:46:48.292676 106 log.go:172] (0xc00070c6e0) (3) Data frame handling\nI0308 14:46:48.292717 106 log.go:172] (0xc00070c6e0) (3) Data frame sent\nI0308 14:46:48.292727 106 log.go:172] (0xc00081c2c0) Data frame received for 3\nI0308 14:46:48.292733 106 log.go:172] (0xc00070c6e0) (3) Data frame handling\nI0308 14:46:48.295473 106 log.go:172] (0xc00081c2c0) Data frame received for 5\nI0308 14:46:48.295484 106 log.go:172] (0xc00070c780) (5) Data frame handling\nI0308 14:46:48.296435 106 log.go:172] (0xc00081c2c0) Data frame received for 1\nI0308 14:46:48.296461 106 log.go:172] (0xc00070c640) (1) Data frame handling\nI0308 14:46:48.296473 106 log.go:172] (0xc00070c640) (1) Data frame sent\nI0308 14:46:48.296487 106 log.go:172] (0xc00081c2c0) (0xc00070c640) Stream removed, broadcasting: 1\nI0308 14:46:48.296503 106 log.go:172] (0xc00081c2c0) Go away received\nI0308 14:46:48.296722 106 log.go:172] (0xc00081c2c0) (0xc00070c640) Stream removed, broadcasting: 1\nI0308 14:46:48.296738 106 log.go:172] (0xc00081c2c0) (0xc00070c6e0) Stream removed, broadcasting: 3\nI0308 14:46:48.296745 106 log.go:172] (0xc00081c2c0) (0xc00070c780) Stream removed, broadcasting: 5\n"
Mar 8 14:46:48.298: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Mar 8 14:46:48.298: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Mar 8 14:46:58.312: INFO: Waiting for StatefulSet e2e-tests-statefulset-74p9g/ss2 to complete update
Mar 8 14:46:58.312: INFO: Waiting for Pod e2e-tests-statefulset-74p9g/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Mar 8 14:46:58.312: INFO: Waiting for Pod e2e-tests-statefulset-74p9g/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Mar 8 14:47:08.317: INFO: Waiting for StatefulSet e2e-tests-statefulset-74p9g/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Mar 8 14:47:21.678: INFO: Deleting all statefulset in ns e2e-tests-statefulset-74p9g
Mar 8 14:47:24.284: INFO: Scaling statefulset ss2 to 0
Mar 8 14:47:46.009: INFO: Waiting for statefulset status.replicas updated to 0
Mar 8 14:47:46.012: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:47:46.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-74p9g" for this suite.
Mar 8 14:47:52.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:47:52.095: INFO: namespace: e2e-tests-statefulset-74p9g, resource: bindings, ignored listing per whitelist
Mar 8 14:47:52.136: INFO: namespace e2e-tests-statefulset-74p9g deletion completed in 6.104005377s

• [SLOW TEST:155.003 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:47:52.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Mar 8 14:48:02.281: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-l45cw PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 14:48:02.281: INFO: >>> kubeConfig: /root/.kube/config
I0308 14:48:02.315468 6 log.go:172] (0xc00145a2c0) (0xc00166b040) Create stream
I0308 14:48:02.315505 6 log.go:172] (0xc00145a2c0) (0xc00166b040) Stream added, broadcasting: 1
I0308 14:48:02.317261 6 log.go:172] (0xc00145a2c0) Reply frame received for 1
I0308 14:48:02.317298 6 log.go:172] (0xc00145a2c0) (0xc0017ab0e0) Create stream
I0308 14:48:02.317311 6 log.go:172] (0xc00145a2c0) (0xc0017ab0e0) Stream added, broadcasting: 3
I0308 14:48:02.318164 6 log.go:172] (0xc00145a2c0) Reply frame received for 3
I0308 14:48:02.318205 6 log.go:172] (0xc00145a2c0) (0xc0011c6320) Create stream
I0308 14:48:02.318218 6 log.go:172] (0xc00145a2c0) (0xc0011c6320) Stream added, broadcasting: 5
I0308 14:48:02.319077 6 log.go:172] (0xc00145a2c0) Reply frame received for 5
I0308 14:48:02.373163 6 log.go:172] (0xc00145a2c0) Data frame received for 5
I0308 14:48:02.373198 6 log.go:172] (0xc0011c6320) (5) Data frame handling
I0308 14:48:02.373222 6 log.go:172] (0xc00145a2c0) Data frame received for 3
I0308 14:48:02.373233 6 log.go:172] (0xc0017ab0e0) (3) Data frame handling
I0308 14:48:02.373244 6 log.go:172] (0xc0017ab0e0) (3) Data frame sent
I0308 14:48:02.373253 6 log.go:172] (0xc00145a2c0) Data frame received for 3
I0308 14:48:02.373261 6 log.go:172] (0xc0017ab0e0) (3) Data frame handling
I0308 14:48:02.375058 6 log.go:172] (0xc00145a2c0) Data frame received for 1
I0308 14:48:02.375086 6 log.go:172] (0xc00166b040) (1) Data frame handling
I0308 14:48:02.375104 6 log.go:172] (0xc00166b040) (1) Data frame sent
I0308 14:48:02.375119 6 log.go:172] (0xc00145a2c0) (0xc00166b040) Stream removed, broadcasting: 1
I0308 14:48:02.375135 6 log.go:172] (0xc00145a2c0) Go away received
I0308 14:48:02.375325 6 log.go:172] (0xc00145a2c0) (0xc00166b040) Stream removed, broadcasting: 1
I0308 14:48:02.375345 6 log.go:172] (0xc00145a2c0) (0xc0017ab0e0) Stream removed, broadcasting: 3
I0308 14:48:02.375354 6 log.go:172] (0xc00145a2c0) (0xc0011c6320) Stream removed, broadcasting: 5
Mar 8 14:48:02.375: INFO: Exec stderr: ""
Mar 8 14:48:02.375: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-l45cw PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 14:48:02.375: INFO: >>> kubeConfig: /root/.kube/config
I0308 14:48:02.407406 6 log.go:172] (0xc000c542c0) (0xc0011c65a0) Create stream
I0308 14:48:02.407445 6 log.go:172] (0xc000c542c0) (0xc0011c65a0) Stream added, broadcasting: 1
I0308 14:48:02.408832 6 log.go:172] (0xc000c542c0) Reply frame received for 1
I0308 14:48:02.408858 6 log.go:172] (0xc000c542c0) (0xc000ca54a0) Create stream
I0308 14:48:02.408868 6 log.go:172] (0xc000c542c0) (0xc000ca54a0) Stream added, broadcasting: 3
I0308 14:48:02.409581 6 log.go:172] (0xc000c542c0) Reply frame received for 3
I0308 14:48:02.409603 6 log.go:172] (0xc000c542c0) (0xc0011c6640) Create stream
I0308 14:48:02.409610 6 log.go:172] (0xc000c542c0) (0xc0011c6640) Stream added, broadcasting: 5
I0308 14:48:02.410382 6 log.go:172] (0xc000c542c0) Reply frame received for 5
I0308 14:48:02.454416 6 log.go:172] (0xc000c542c0) Data frame received for 3
I0308 14:48:02.454447 6 log.go:172] (0xc000ca54a0) (3) Data frame handling
I0308 14:48:02.454457 6 log.go:172] (0xc000ca54a0) (3) Data frame sent
I0308 14:48:02.454463 6 log.go:172] (0xc000c542c0) Data frame received for 3
I0308 14:48:02.454468 6 log.go:172] (0xc000ca54a0) (3) Data frame handling
I0308 14:48:02.454479 6 log.go:172] (0xc000c542c0) Data frame received for 5
I0308 14:48:02.454486 6 log.go:172] (0xc0011c6640) (5) Data frame handling
I0308 14:48:02.455777 6 log.go:172] (0xc000c542c0) Data frame received for 1
I0308 14:48:02.455800 6 log.go:172] (0xc0011c65a0) (1) Data frame handling
I0308 14:48:02.455823 6 log.go:172] (0xc0011c65a0) (1) Data frame sent
I0308 14:48:02.455840 6 log.go:172] (0xc000c542c0) (0xc0011c65a0) Stream removed, broadcasting: 1
I0308 14:48:02.455861 6 log.go:172] (0xc000c542c0) Go away received
I0308 14:48:02.455982 6 log.go:172] (0xc000c542c0) (0xc0011c65a0) Stream removed, broadcasting: 1
I0308 14:48:02.456013 6 log.go:172] (0xc000c542c0) (0xc000ca54a0) Stream removed, broadcasting: 3
I0308 14:48:02.456033 6 log.go:172] (0xc000c542c0) (0xc0011c6640) Stream removed, broadcasting: 5
Mar 8 14:48:02.456: INFO: Exec stderr: ""
Mar 8 14:48:02.456: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-l45cw PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 14:48:02.456: INFO: >>> kubeConfig: /root/.kube/config
I0308 14:48:02.481485 6 log.go:172] (0xc000efc2c0) (0xc0017ab360) Create stream
I0308 14:48:02.481507 6 log.go:172] (0xc000efc2c0) (0xc0017ab360) Stream added, broadcasting: 1
I0308 14:48:02.482959 6 log.go:172] (0xc000efc2c0) Reply frame received for 1
I0308 14:48:02.482988 6 log.go:172] (0xc000efc2c0) (0xc0017ab400) Create stream
I0308 14:48:02.482997 6 log.go:172] (0xc000efc2c0) (0xc0017ab400) Stream added, broadcasting: 3
I0308 14:48:02.483768 6 log.go:172] (0xc000efc2c0) Reply frame received for 3
I0308 14:48:02.483799 6 log.go:172] (0xc000efc2c0) (0xc0017ab4a0) Create stream
I0308 14:48:02.483806 6 log.go:172] (0xc000efc2c0) (0xc0017ab4a0) Stream added, broadcasting: 5
I0308 14:48:02.484539 6 log.go:172] (0xc000efc2c0) Reply frame received for 5
I0308 14:48:02.544592 6 log.go:172] (0xc000efc2c0) Data frame received for 3
I0308 14:48:02.544639 6 log.go:172] (0xc0017ab400) (3) Data frame handling
I0308 14:48:02.544653 6 log.go:172] (0xc0017ab400) (3) Data frame sent
I0308 14:48:02.544668 6 log.go:172] (0xc000efc2c0) Data frame received for 3
I0308 14:48:02.544675 6 log.go:172] (0xc0017ab400) (3) Data frame handling
I0308 14:48:02.544701 6 log.go:172] (0xc000efc2c0) Data frame received for 5
I0308 14:48:02.544719 6 log.go:172] (0xc0017ab4a0) (5) Data frame handling
I0308 14:48:02.546101 6 log.go:172] (0xc000efc2c0) Data frame received for 1
I0308 14:48:02.546158 6 log.go:172] (0xc0017ab360) (1) Data frame handling
I0308 14:48:02.546192 6 log.go:172] (0xc0017ab360) (1) Data frame sent
I0308 14:48:02.546419 6 log.go:172] (0xc000efc2c0) (0xc0017ab360) Stream removed, broadcasting: 1
I0308 14:48:02.546442 6 log.go:172] (0xc000efc2c0) Go away received
I0308 14:48:02.546545 6 log.go:172] (0xc000efc2c0) (0xc0017ab360) Stream removed, broadcasting: 1
I0308 14:48:02.546569 6 log.go:172] (0xc000efc2c0) (0xc0017ab400) Stream removed, broadcasting: 3
I0308 14:48:02.546580 6 log.go:172] (0xc000efc2c0) (0xc0017ab4a0) Stream removed, broadcasting: 5
Mar 8 14:48:02.546: INFO: Exec stderr: ""
Mar 8 14:48:02.546: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-l45cw PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 14:48:02.546: INFO: >>> kubeConfig: /root/.kube/config
I0308 14:48:02.571935 6 log.go:172] (0xc000c54790) (0xc0011c68c0) Create stream
I0308 14:48:02.571960 6 log.go:172] (0xc000c54790) (0xc0011c68c0) Stream added, broadcasting: 1
I0308 14:48:02.573425 6 log.go:172] (0xc000c54790) Reply frame received for 1
I0308 14:48:02.573448 6 log.go:172] (0xc000c54790) (0xc000ca5540) Create stream
I0308 14:48:02.573456 6 log.go:172] (0xc000c54790) (0xc000ca5540) Stream added, broadcasting: 3
I0308 14:48:02.574200 6 log.go:172] (0xc000c54790) Reply frame received for 3
I0308 14:48:02.574230 6 log.go:172] (0xc000c54790) (0xc0017ab540) Create stream
I0308 14:48:02.574239 6 log.go:172] (0xc000c54790) (0xc0017ab540) Stream added, broadcasting: 5
I0308 14:48:02.574962 6 log.go:172] (0xc000c54790) Reply frame received for 5
I0308 14:48:02.649203 6 log.go:172] (0xc000c54790) Data frame received for 5
I0308 14:48:02.649237 6 log.go:172] (0xc0017ab540) (5) Data frame handling
I0308 14:48:02.649256 6 log.go:172] (0xc000c54790) Data frame received for 3
I0308 14:48:02.649266 6 log.go:172] (0xc000ca5540) (3) Data frame handling
I0308 14:48:02.649273 6 log.go:172] (0xc000ca5540) (3) Data frame sent
I0308 14:48:02.649282 6 log.go:172] (0xc000c54790) Data frame received for 3
I0308 14:48:02.649287 6 log.go:172] (0xc000ca5540) (3) Data frame handling
I0308 14:48:02.650351 6 log.go:172] (0xc000c54790) Data frame received for 1
I0308 14:48:02.650376 6 log.go:172] (0xc0011c68c0) (1) Data frame handling
I0308 14:48:02.650404 6 log.go:172] (0xc0011c68c0) (1) Data frame sent
I0308 14:48:02.650423 6 log.go:172] (0xc000c54790) (0xc0011c68c0) Stream removed, broadcasting: 1
I0308 14:48:02.650447 6 log.go:172] (0xc000c54790) Go away received
I0308 14:48:02.650545 6 log.go:172] (0xc000c54790) (0xc0011c68c0) Stream removed, broadcasting: 1
I0308 14:48:02.650562 6 log.go:172] (0xc000c54790) (0xc000ca5540) Stream removed, broadcasting: 3
I0308 14:48:02.650574 6 log.go:172] (0xc000c54790) (0xc0017ab540) Stream removed, broadcasting: 5
Mar 8 14:48:02.650: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Mar 8 14:48:02.650: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-l45cw PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 14:48:02.650: INFO: >>> kubeConfig: /root/.kube/config
I0308 14:48:02.679605 6 log.go:172] (0xc00145a790) (0xc00166b2c0) Create stream
I0308 14:48:02.679635 6 log.go:172] (0xc00145a790) (0xc00166b2c0) Stream added, broadcasting: 1
I0308 14:48:02.683071 6 log.go:172] (0xc00145a790) Reply frame received for 1
I0308 14:48:02.683112 6 log.go:172] (0xc00145a790) (0xc000dd45a0) Create stream
I0308 14:48:02.683125 6 log.go:172] (0xc00145a790) (0xc000dd45a0) Stream added, broadcasting: 3
I0308 14:48:02.686010 6 log.go:172] (0xc00145a790) Reply frame received for 3
I0308 14:48:02.686038 6 log.go:172] (0xc00145a790) (0xc000ca55e0) Create stream
I0308 14:48:02.686051 6 log.go:172] (0xc00145a790) (0xc000ca55e0) Stream added, broadcasting: 5
I0308 14:48:02.686921 6 log.go:172] (0xc00145a790) Reply frame received for 5
I0308 14:48:02.756280 6 log.go:172] (0xc00145a790) Data frame received for 3
I0308 14:48:02.756308 6 log.go:172] (0xc000dd45a0) (3) Data frame handling
I0308 14:48:02.756319 6 log.go:172] (0xc000dd45a0) (3) Data frame sent
I0308 14:48:02.756326 6 log.go:172] (0xc00145a790) Data frame received for 3
I0308 14:48:02.756334 6 log.go:172] (0xc000dd45a0) (3) Data frame handling
I0308 14:48:02.756367 6 log.go:172] (0xc00145a790) Data frame received for 5
I0308 14:48:02.756382 6 log.go:172] (0xc000ca55e0) (5) Data frame handling
I0308 14:48:02.757384 6 log.go:172] (0xc00145a790) Data frame received for 1
I0308 14:48:02.757400 6 log.go:172] (0xc00166b2c0) (1) Data frame handling
I0308 14:48:02.757407 6 log.go:172] (0xc00166b2c0) (1) Data frame sent
I0308 14:48:02.757414 6 log.go:172] (0xc00145a790) (0xc00166b2c0) Stream removed, broadcasting: 1
I0308 14:48:02.757434 6 log.go:172] (0xc00145a790) Go away received
I0308 14:48:02.757476 6 log.go:172] (0xc00145a790) (0xc00166b2c0) Stream removed, broadcasting: 1
I0308 14:48:02.757486 6 log.go:172] (0xc00145a790) (0xc000dd45a0) Stream removed, broadcasting: 3
I0308 14:48:02.757491 6 log.go:172] (0xc00145a790) (0xc000ca55e0) Stream removed, broadcasting: 5
Mar 8 14:48:02.757: INFO: Exec stderr: ""
Mar 8 14:48:02.757: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-l45cw PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 14:48:02.757: INFO: >>> kubeConfig: /root/.kube/config
I0308 14:48:02.776362 6 log.go:172] (0xc000efc790) (0xc0017ab7c0) Create stream
I0308 14:48:02.776384 6 log.go:172] (0xc000efc790) (0xc0017ab7c0) Stream added, broadcasting: 1
I0308 14:48:02.777890 6 log.go:172] (0xc000efc790) Reply frame received for 1
I0308 14:48:02.777931 6 log.go:172] (0xc000efc790) (0xc0011c6960) Create stream
I0308 14:48:02.777943 6 log.go:172] (0xc000efc790) (0xc0011c6960) Stream added, broadcasting: 3
I0308 14:48:02.778697 6 log.go:172] (0xc000efc790) Reply frame received for 3
I0308 14:48:02.778730 6 log.go:172] (0xc000efc790) (0xc000ca5680) Create stream
I0308 14:48:02.778741 6 log.go:172] (0xc000efc790) (0xc000ca5680) Stream added, broadcasting: 5
I0308 14:48:02.779609 6 log.go:172] (0xc000efc790) Reply frame received for 5
I0308 14:48:02.828609 6 log.go:172] (0xc000efc790) Data frame received for 5
I0308 14:48:02.828647 6 log.go:172] (0xc000ca5680) (5) Data frame handling
I0308 14:48:02.828669 6 log.go:172] (0xc000efc790) Data frame received for 3
I0308 14:48:02.828682 6 log.go:172] (0xc0011c6960) (3) Data frame handling
I0308 14:48:02.828694 6 log.go:172] (0xc0011c6960) (3) Data frame sent
I0308 14:48:02.828704 6 log.go:172] (0xc000efc790) Data frame received for 3
I0308 14:48:02.828713 6 log.go:172] (0xc0011c6960) (3) Data frame handling
I0308 14:48:02.829708 6 log.go:172] (0xc000efc790) Data frame received for 1
I0308 14:48:02.829728 6 log.go:172] (0xc0017ab7c0) (1) Data frame handling
I0308 14:48:02.829744 6 log.go:172] (0xc0017ab7c0) (1) Data frame sent
I0308 14:48:02.829760 6 log.go:172] (0xc000efc790) (0xc0017ab7c0) Stream removed, broadcasting: 1
I0308 14:48:02.829786 6 log.go:172] (0xc000efc790) Go away received
I0308 14:48:02.829903 6 log.go:172] (0xc000efc790) (0xc0017ab7c0) Stream removed, broadcasting: 1
I0308 14:48:02.829931 6 log.go:172] (0xc000efc790) (0xc0011c6960) Stream removed, broadcasting: 3
I0308 14:48:02.829938 6 log.go:172] (0xc000efc790) (0xc000ca5680) Stream removed, broadcasting: 5
Mar 8 14:48:02.829: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Mar 8 14:48:02.829: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-l45cw PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 14:48:02.829: INFO: >>> kubeConfig: /root/.kube/config
I0308 14:48:02.851547 6 log.go:172] (0xc00131a2c0) (0xc000dd4820) Create stream
I0308 14:48:02.851565 6 log.go:172] (0xc00131a2c0) (0xc000dd4820) Stream added, broadcasting: 1
I0308 14:48:02.855176 6 log.go:172] (0xc00131a2c0) Reply frame received for 1
I0308 14:48:02.855218 6 log.go:172] (0xc00131a2c0) (0xc000ca4000) Create stream
I0308 14:48:02.855234 6 log.go:172] (0xc00131a2c0) (0xc000ca4000) Stream added, broadcasting: 3
I0308 14:48:02.856004 6 log.go:172] (0xc00131a2c0) Reply frame received for 3
I0308 14:48:02.856030 6 log.go:172] (0xc00131a2c0) (0xc001710000) Create stream
I0308 14:48:02.856039 6 log.go:172] (0xc00131a2c0) (0xc001710000) Stream added, broadcasting: 5
I0308 14:48:02.856822 6 log.go:172] (0xc00131a2c0) Reply frame received for 5
I0308 14:48:02.907462 6 log.go:172] (0xc00131a2c0) Data frame received for 5
I0308 14:48:02.907491 6 log.go:172] (0xc001710000) (5) Data frame handling
I0308 14:48:02.907517 6 log.go:172] (0xc00131a2c0) Data frame received for 3
I0308 14:48:02.907542 6 log.go:172] (0xc000ca4000) (3) Data frame handling
I0308 14:48:02.907555 6 log.go:172] (0xc000ca4000) (3) Data frame sent
I0308 14:48:02.907563 6 log.go:172] (0xc00131a2c0) Data frame received for 3
I0308 14:48:02.907567 6 log.go:172] (0xc000ca4000) (3) Data frame handling
I0308 14:48:02.908835 6 log.go:172] (0xc00131a2c0) Data frame received for 1
I0308 14:48:02.908863 6 log.go:172] (0xc000dd4820) (1) Data frame handling
I0308 14:48:02.908882 6 log.go:172] (0xc000dd4820) (1) Data frame sent
I0308 14:48:02.908909 6 log.go:172] (0xc00131a2c0) (0xc000dd4820) Stream removed, broadcasting: 1
I0308 14:48:02.909011 6 log.go:172] (0xc00131a2c0) (0xc000dd4820) Stream removed, broadcasting: 1
I0308 14:48:02.909032 6 log.go:172] (0xc00131a2c0) (0xc000ca4000) Stream removed, broadcasting: 3
I0308 14:48:02.909158 6 log.go:172] (0xc00131a2c0) Go away received
I0308 14:48:02.909302 6 log.go:172] (0xc00131a2c0) (0xc001710000) Stream removed, broadcasting: 5
Mar 8 14:48:02.909: INFO: Exec stderr: ""
Mar 8 14:48:02.909: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-l45cw PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 14:48:02.909: INFO: >>> kubeConfig: /root/.kube/config
I0308 14:48:02.928338 6 log.go:172] (0xc0000eb290) (0xc0013841e0) Create stream
I0308 14:48:02.928359 6 log.go:172] (0xc0000eb290) (0xc0013841e0) Stream added, broadcasting: 1
I0308 14:48:02.929533 6 log.go:172] (0xc0000eb290) Reply frame received for 1
I0308 14:48:02.929563 6 log.go:172] (0xc0000eb290) (0xc000ca40a0) Create stream
I0308 14:48:02.929585 6 log.go:172] (0xc0000eb290) (0xc000ca40a0) Stream added, broadcasting: 3
I0308 14:48:02.930423 6 log.go:172] (0xc0000eb290) Reply frame received for 3
I0308 14:48:02.930472 6 log.go:172] (0xc0000eb290) (0xc0017100a0) Create stream
I0308 14:48:02.930485 6 log.go:172] (0xc0000eb290) (0xc0017100a0) Stream added, broadcasting: 5
I0308 14:48:02.931195 6 log.go:172] (0xc0000eb290) Reply frame received for 5
I0308 14:48:02.984137 6 log.go:172] (0xc0000eb290) Data frame received for 5
I0308 14:48:02.984174 6 log.go:172] (0xc0017100a0) (5) Data frame handling
I0308 14:48:02.984198 6 log.go:172] (0xc0000eb290) Data frame received for 3
I0308 14:48:02.984207 6 log.go:172] (0xc000ca40a0) (3) Data frame handling
I0308 14:48:02.984217 6 log.go:172] (0xc000ca40a0) (3) Data frame sent
I0308 14:48:02.984227 6 log.go:172] (0xc0000eb290) Data frame received for 3
I0308 14:48:02.984236 6 log.go:172] (0xc000ca40a0) (3) Data frame handling
I0308 14:48:02.985544 6 log.go:172] (0xc0000eb290) Data frame received for 1
I0308 14:48:02.985602 6 log.go:172] (0xc0013841e0) (1) Data frame handling
I0308 14:48:02.985652 6 log.go:172] (0xc0013841e0) (1) Data frame sent
I0308 14:48:02.985671 6 log.go:172] (0xc0000eb290) (0xc0013841e0) Stream removed, broadcasting: 1
I0308 14:48:02.985689 6 log.go:172] (0xc0000eb290) Go away received
I0308 14:48:02.985834 6 log.go:172] (0xc0000eb290) (0xc0013841e0) Stream removed, broadcasting: 1
I0308 14:48:02.985855 6 log.go:172] (0xc0000eb290) (0xc000ca40a0) Stream removed, broadcasting: 3
I0308 14:48:02.985863 6 log.go:172] (0xc0000eb290) (0xc0017100a0) Stream removed, broadcasting: 5
Mar 8 14:48:02.985: INFO: Exec stderr: ""
Mar 8 14:48:02.985: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-l45cw PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 14:48:02.985: INFO: >>> kubeConfig: /root/.kube/config
I0308 14:48:03.011891 6 log.go:172] (0xc000c542c0) (0xc00033a5a0) Create stream
I0308 14:48:03.011916 6 log.go:172] (0xc000c542c0) (0xc00033a5a0) Stream added, broadcasting: 1
I0308 14:48:03.013524 6 log.go:172] (0xc000c542c0) Reply frame received for 1
I0308 14:48:03.013552 6 log.go:172] (0xc000c542c0) (0xc001384280) Create stream
I0308 14:48:03.013564 6 log.go:172] (0xc000c542c0) (0xc001384280) Stream added, broadcasting: 3
I0308 14:48:03.014452 6 log.go:172] (0xc000c542c0) Reply frame received for 3
I0308 14:48:03.014485 6 log.go:172] (0xc000c542c0) (0xc001384320) Create stream
I0308 14:48:03.014494 6 log.go:172] (0xc000c542c0) (0xc001384320) Stream added, broadcasting: 5
I0308 14:48:03.015312 6 log.go:172] (0xc000c542c0) Reply frame received for 5
I0308 14:48:03.067947 6 log.go:172] (0xc000c542c0) Data frame received for 5
I0308 14:48:03.067981 6 log.go:172] (0xc001384320) (5) Data frame handling
I0308 14:48:03.068014 6 log.go:172] (0xc000c542c0) Data frame received for 3
I0308 14:48:03.068054 6 log.go:172] (0xc001384280) (3) Data frame handling
I0308 14:48:03.068079 6
log.go:172] (0xc001384280) (3) Data frame sent I0308 14:48:03.068095 6 log.go:172] (0xc000c542c0) Data frame received for 3 I0308 14:48:03.068108 6 log.go:172] (0xc001384280) (3) Data frame handling I0308 14:48:03.069585 6 log.go:172] (0xc000c542c0) Data frame received for 1 I0308 14:48:03.069618 6 log.go:172] (0xc00033a5a0) (1) Data frame handling I0308 14:48:03.069649 6 log.go:172] (0xc00033a5a0) (1) Data frame sent I0308 14:48:03.069687 6 log.go:172] (0xc000c542c0) (0xc00033a5a0) Stream removed, broadcasting: 1 I0308 14:48:03.069703 6 log.go:172] (0xc000c542c0) Go away received I0308 14:48:03.069785 6 log.go:172] (0xc000c542c0) (0xc00033a5a0) Stream removed, broadcasting: 1 I0308 14:48:03.069801 6 log.go:172] (0xc000c542c0) (0xc001384280) Stream removed, broadcasting: 3 I0308 14:48:03.069810 6 log.go:172] (0xc000c542c0) (0xc001384320) Stream removed, broadcasting: 5 Mar 8 14:48:03.069: INFO: Exec stderr: "" Mar 8 14:48:03.069: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-l45cw PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 14:48:03.069: INFO: >>> kubeConfig: /root/.kube/config I0308 14:48:03.093614 6 log.go:172] (0xc00131a370) (0xc001710320) Create stream I0308 14:48:03.093640 6 log.go:172] (0xc00131a370) (0xc001710320) Stream added, broadcasting: 1 I0308 14:48:03.095181 6 log.go:172] (0xc00131a370) Reply frame received for 1 I0308 14:48:03.095212 6 log.go:172] (0xc00131a370) (0xc00033aaa0) Create stream I0308 14:48:03.095221 6 log.go:172] (0xc00131a370) (0xc00033aaa0) Stream added, broadcasting: 3 I0308 14:48:03.095924 6 log.go:172] (0xc00131a370) Reply frame received for 3 I0308 14:48:03.095949 6 log.go:172] (0xc00131a370) (0xc000195720) Create stream I0308 14:48:03.095960 6 log.go:172] (0xc00131a370) (0xc000195720) Stream added, broadcasting: 5 I0308 14:48:03.096747 6 log.go:172] (0xc00131a370) Reply frame received for 5 
I0308 14:48:03.160556 6 log.go:172] (0xc00131a370) Data frame received for 5 I0308 14:48:03.160599 6 log.go:172] (0xc000195720) (5) Data frame handling I0308 14:48:03.160626 6 log.go:172] (0xc00131a370) Data frame received for 3 I0308 14:48:03.160638 6 log.go:172] (0xc00033aaa0) (3) Data frame handling I0308 14:48:03.160658 6 log.go:172] (0xc00033aaa0) (3) Data frame sent I0308 14:48:03.160670 6 log.go:172] (0xc00131a370) Data frame received for 3 I0308 14:48:03.160683 6 log.go:172] (0xc00033aaa0) (3) Data frame handling I0308 14:48:03.161923 6 log.go:172] (0xc00131a370) Data frame received for 1 I0308 14:48:03.161945 6 log.go:172] (0xc001710320) (1) Data frame handling I0308 14:48:03.161953 6 log.go:172] (0xc001710320) (1) Data frame sent I0308 14:48:03.161965 6 log.go:172] (0xc00131a370) (0xc001710320) Stream removed, broadcasting: 1 I0308 14:48:03.161983 6 log.go:172] (0xc00131a370) Go away received I0308 14:48:03.162112 6 log.go:172] (0xc00131a370) (0xc001710320) Stream removed, broadcasting: 1 I0308 14:48:03.162160 6 log.go:172] (0xc00131a370) (0xc00033aaa0) Stream removed, broadcasting: 3 I0308 14:48:03.162167 6 log.go:172] (0xc00131a370) (0xc000195720) Stream removed, broadcasting: 5 Mar 8 14:48:03.162: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 14:48:03.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-l45cw" for this suite. 
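For orientation while reading this transcript: the checks above exec `cat /etc/hosts` and `cat /etc/hosts-original` inside the containers of a pod running with `hostNetwork: true`, and expect the file contents to show that the kubelet is not managing /etc/hosts for such pods. A minimal manifest reproducing that setup might look like the following sketch; the pod and container names match the log's `ExecWithOptions` output, but the image and command are assumptions, not taken from the test source:

```yaml
# Sketch of a pod equivalent to the test's "test-host-network-pod".
# With hostNetwork: true the kubelet does NOT inject its managed
# /etc/hosts into the containers; that is what the exec checks verify.
# image and command are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: test-host-network-pod
spec:
  hostNetwork: true
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
  - name: busybox-2
    image: busybox
    command: ["sleep", "3600"]
```

One could then repeat the log's check by hand with `kubectl exec test-host-network-pod -c busybox-1 -- cat /etc/hosts`; for a non-hostNetwork pod the same command would typically show the kubelet-managed hosts file instead.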
Mar 8 14:48:41.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:48:41.213: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-l45cw, resource: bindings, ignored listing per whitelist
Mar 8 14:48:41.258: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-l45cw deletion completed in 38.092840046s
• [SLOW TEST:49.121 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:48:41.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-kxxxs
Mar 8 14:48:45.546: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-kxxxs
STEP: checking the pod's current state and verifying that restartCount is present
Mar 8 14:48:45.550: INFO: Initial restart count of pod liveness-http is 0
Mar 8 14:49:09.812: INFO: Restart count of pod e2e-tests-container-probe-kxxxs/liveness-http is now 1 (24.26238033s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:49:09.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-kxxxs" for this suite.
Mar 8 14:49:15.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:49:15.938: INFO: namespace: e2e-tests-container-probe-kxxxs, resource: bindings, ignored listing per whitelist
Mar 8 14:49:15.949: INFO: namespace e2e-tests-container-probe-kxxxs deletion completed in 6.09705148s
• [SLOW TEST:34.691 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:49:15.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic
StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-k8gqr [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-k8gqr STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-k8gqr Mar 8 14:49:16.140: INFO: Found 0 stateful pods, waiting for 1 Mar 8 14:49:26.170: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Mar 8 14:49:36.144: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 8 14:49:36.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 8 14:49:36.358: INFO: stderr: "I0308 14:49:36.269029 129 log.go:172] (0xc000138840) (0xc000699360) Create stream\nI0308 14:49:36.269071 129 log.go:172] (0xc000138840) (0xc000699360) Stream added, broadcasting: 1\nI0308 14:49:36.270671 129 log.go:172] (0xc000138840) Reply frame received for 1\nI0308 14:49:36.270712 129 log.go:172] (0xc000138840) (0xc000699400) Create stream\nI0308 14:49:36.270723 129 log.go:172] (0xc000138840) (0xc000699400) Stream added, broadcasting: 3\nI0308 14:49:36.271777 129 log.go:172] (0xc000138840) Reply frame received for 3\nI0308 14:49:36.271805 129 log.go:172] (0xc000138840) (0xc0006994a0) Create stream\nI0308 14:49:36.271817 129 log.go:172] (0xc000138840) (0xc0006994a0) Stream added, broadcasting: 5\nI0308 14:49:36.273474 129 log.go:172] (0xc000138840) Reply frame received for 5\nI0308 14:49:36.354972 129 
log.go:172] (0xc000138840) Data frame received for 3\nI0308 14:49:36.355003 129 log.go:172] (0xc000699400) (3) Data frame handling\nI0308 14:49:36.355027 129 log.go:172] (0xc000699400) (3) Data frame sent\nI0308 14:49:36.355041 129 log.go:172] (0xc000138840) Data frame received for 3\nI0308 14:49:36.355051 129 log.go:172] (0xc000699400) (3) Data frame handling\nI0308 14:49:36.355187 129 log.go:172] (0xc000138840) Data frame received for 5\nI0308 14:49:36.355203 129 log.go:172] (0xc0006994a0) (5) Data frame handling\nI0308 14:49:36.356810 129 log.go:172] (0xc000138840) Data frame received for 1\nI0308 14:49:36.356840 129 log.go:172] (0xc000699360) (1) Data frame handling\nI0308 14:49:36.356853 129 log.go:172] (0xc000699360) (1) Data frame sent\nI0308 14:49:36.356870 129 log.go:172] (0xc000138840) (0xc000699360) Stream removed, broadcasting: 1\nI0308 14:49:36.356884 129 log.go:172] (0xc000138840) Go away received\nI0308 14:49:36.357115 129 log.go:172] (0xc000138840) (0xc000699360) Stream removed, broadcasting: 1\nI0308 14:49:36.357144 129 log.go:172] (0xc000138840) (0xc000699400) Stream removed, broadcasting: 3\nI0308 14:49:36.357152 129 log.go:172] (0xc000138840) (0xc0006994a0) Stream removed, broadcasting: 5\n" Mar 8 14:49:36.359: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 8 14:49:36.359: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 8 14:49:36.362: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 8 14:49:46.374: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 14:49:46.374: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 14:49:46.390: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:49:46.390: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-03-08 14:49:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC }] Mar 8 14:49:46.391: INFO: Mar 8 14:49:46.391: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 8 14:49:47.394: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991125253s Mar 8 14:49:48.422: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987802121s Mar 8 14:49:49.426: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.959608262s Mar 8 14:49:51.771: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.95536604s Mar 8 14:49:52.774: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.61095283s Mar 8 14:49:53.781: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.607885443s Mar 8 14:49:54.785: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.600291221s Mar 8 14:49:55.795: INFO: Verifying statefulset ss doesn't scale past 3 for another 597.023386ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-k8gqr Mar 8 14:49:56.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:49:56.929: INFO: stderr: "I0308 14:49:56.885967 151 log.go:172] (0xc00084ebb0) (0xc0008bea00) Create stream\nI0308 14:49:56.886003 151 log.go:172] (0xc00084ebb0) (0xc0008bea00) Stream added, broadcasting: 1\nI0308 14:49:56.893086 151 log.go:172] (0xc00084ebb0) Reply frame received for 1\nI0308 14:49:56.893116 151 log.go:172] (0xc00084ebb0) (0xc0007fa460) Create stream\nI0308 14:49:56.893126 151 log.go:172] (0xc00084ebb0) (0xc0007fa460) 
Stream added, broadcasting: 3\nI0308 14:49:56.893680 151 log.go:172] (0xc00084ebb0) Reply frame received for 3\nI0308 14:49:56.893698 151 log.go:172] (0xc00084ebb0) (0xc0007fa500) Create stream\nI0308 14:49:56.893703 151 log.go:172] (0xc00084ebb0) (0xc0007fa500) Stream added, broadcasting: 5\nI0308 14:49:56.894262 151 log.go:172] (0xc00084ebb0) Reply frame received for 5\nI0308 14:49:56.925140 151 log.go:172] (0xc00084ebb0) Data frame received for 5\nI0308 14:49:56.925164 151 log.go:172] (0xc0007fa500) (5) Data frame handling\nI0308 14:49:56.925189 151 log.go:172] (0xc00084ebb0) Data frame received for 3\nI0308 14:49:56.925211 151 log.go:172] (0xc0007fa460) (3) Data frame handling\nI0308 14:49:56.925229 151 log.go:172] (0xc0007fa460) (3) Data frame sent\nI0308 14:49:56.925238 151 log.go:172] (0xc00084ebb0) Data frame received for 3\nI0308 14:49:56.925245 151 log.go:172] (0xc0007fa460) (3) Data frame handling\nI0308 14:49:56.925777 151 log.go:172] (0xc00084ebb0) Data frame received for 1\nI0308 14:49:56.925806 151 log.go:172] (0xc0008bea00) (1) Data frame handling\nI0308 14:49:56.925819 151 log.go:172] (0xc0008bea00) (1) Data frame sent\nI0308 14:49:56.925831 151 log.go:172] (0xc00084ebb0) (0xc0008bea00) Stream removed, broadcasting: 1\nI0308 14:49:56.925844 151 log.go:172] (0xc00084ebb0) Go away received\nI0308 14:49:56.926006 151 log.go:172] (0xc00084ebb0) (0xc0008bea00) Stream removed, broadcasting: 1\nI0308 14:49:56.926018 151 log.go:172] (0xc00084ebb0) (0xc0007fa460) Stream removed, broadcasting: 3\nI0308 14:49:56.926026 151 log.go:172] (0xc00084ebb0) (0xc0007fa500) Stream removed, broadcasting: 5\n" Mar 8 14:49:56.930: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 8 14:49:56.930: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 8 14:49:56.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-k8gqr ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:49:57.073: INFO: stderr: "I0308 14:49:57.006091 173 log.go:172] (0xc0001380b0) (0xc0004f5400) Create stream\nI0308 14:49:57.006159 173 log.go:172] (0xc0001380b0) (0xc0004f5400) Stream added, broadcasting: 1\nI0308 14:49:57.012279 173 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0308 14:49:57.012315 173 log.go:172] (0xc0001380b0) (0xc000582000) Create stream\nI0308 14:49:57.012322 173 log.go:172] (0xc0001380b0) (0xc000582000) Stream added, broadcasting: 3\nI0308 14:49:57.012878 173 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0308 14:49:57.012907 173 log.go:172] (0xc0001380b0) (0xc0004f54a0) Create stream\nI0308 14:49:57.012914 173 log.go:172] (0xc0001380b0) (0xc0004f54a0) Stream added, broadcasting: 5\nI0308 14:49:57.013413 173 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0308 14:49:57.070514 173 log.go:172] (0xc0001380b0) Data frame received for 5\nI0308 14:49:57.070552 173 log.go:172] (0xc0001380b0) Data frame received for 3\nI0308 14:49:57.070570 173 log.go:172] (0xc000582000) (3) Data frame handling\nI0308 14:49:57.070580 173 log.go:172] (0xc000582000) (3) Data frame sent\nI0308 14:49:57.070586 173 log.go:172] (0xc0001380b0) Data frame received for 3\nI0308 14:49:57.070589 173 log.go:172] (0xc000582000) (3) Data frame handling\nI0308 14:49:57.070612 173 log.go:172] (0xc0004f54a0) (5) Data frame handling\nI0308 14:49:57.070635 173 log.go:172] (0xc0004f54a0) (5) Data frame sent\nI0308 14:49:57.070647 173 log.go:172] (0xc0001380b0) Data frame received for 5\nI0308 14:49:57.070653 173 log.go:172] (0xc0004f54a0) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0308 14:49:57.071323 173 log.go:172] (0xc0001380b0) Data frame received for 1\nI0308 14:49:57.071332 173 log.go:172] (0xc0004f5400) (1) Data frame handling\nI0308 14:49:57.071339 173 log.go:172] (0xc0004f5400) (1) Data 
frame sent\nI0308 14:49:57.071449 173 log.go:172] (0xc0001380b0) (0xc0004f5400) Stream removed, broadcasting: 1\nI0308 14:49:57.071464 173 log.go:172] (0xc0001380b0) Go away received\nI0308 14:49:57.071681 173 log.go:172] (0xc0001380b0) (0xc0004f5400) Stream removed, broadcasting: 1\nI0308 14:49:57.071689 173 log.go:172] (0xc0001380b0) (0xc000582000) Stream removed, broadcasting: 3\nI0308 14:49:57.071693 173 log.go:172] (0xc0001380b0) (0xc0004f54a0) Stream removed, broadcasting: 5\n" Mar 8 14:49:57.073: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 8 14:49:57.073: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 8 14:49:57.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:49:57.185: INFO: stderr: "I0308 14:49:57.149334 195 log.go:172] (0xc00013a790) (0xc000756640) Create stream\nI0308 14:49:57.149370 195 log.go:172] (0xc00013a790) (0xc000756640) Stream added, broadcasting: 1\nI0308 14:49:57.151396 195 log.go:172] (0xc00013a790) Reply frame received for 1\nI0308 14:49:57.151427 195 log.go:172] (0xc00013a790) (0xc000666c80) Create stream\nI0308 14:49:57.151434 195 log.go:172] (0xc00013a790) (0xc000666c80) Stream added, broadcasting: 3\nI0308 14:49:57.152342 195 log.go:172] (0xc00013a790) Reply frame received for 3\nI0308 14:49:57.152362 195 log.go:172] (0xc00013a790) (0xc0002c4000) Create stream\nI0308 14:49:57.152368 195 log.go:172] (0xc00013a790) (0xc0002c4000) Stream added, broadcasting: 5\nI0308 14:49:57.152845 195 log.go:172] (0xc00013a790) Reply frame received for 5\nI0308 14:49:57.182488 195 log.go:172] (0xc00013a790) Data frame received for 5\nI0308 14:49:57.182518 195 log.go:172] (0xc0002c4000) (5) Data frame handling\nI0308 14:49:57.182530 195 log.go:172] (0xc0002c4000) (5) Data frame 
sent\nI0308 14:49:57.182537 195 log.go:172] (0xc00013a790) Data frame received for 5\nI0308 14:49:57.182541 195 log.go:172] (0xc0002c4000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0308 14:49:57.182561 195 log.go:172] (0xc00013a790) Data frame received for 3\nI0308 14:49:57.182570 195 log.go:172] (0xc000666c80) (3) Data frame handling\nI0308 14:49:57.182577 195 log.go:172] (0xc000666c80) (3) Data frame sent\nI0308 14:49:57.182584 195 log.go:172] (0xc00013a790) Data frame received for 3\nI0308 14:49:57.182589 195 log.go:172] (0xc000666c80) (3) Data frame handling\nI0308 14:49:57.183320 195 log.go:172] (0xc00013a790) Data frame received for 1\nI0308 14:49:57.183343 195 log.go:172] (0xc000756640) (1) Data frame handling\nI0308 14:49:57.183352 195 log.go:172] (0xc000756640) (1) Data frame sent\nI0308 14:49:57.183365 195 log.go:172] (0xc00013a790) (0xc000756640) Stream removed, broadcasting: 1\nI0308 14:49:57.183384 195 log.go:172] (0xc00013a790) Go away received\nI0308 14:49:57.183542 195 log.go:172] (0xc00013a790) (0xc000756640) Stream removed, broadcasting: 1\nI0308 14:49:57.183557 195 log.go:172] (0xc00013a790) (0xc000666c80) Stream removed, broadcasting: 3\nI0308 14:49:57.183565 195 log.go:172] (0xc00013a790) (0xc0002c4000) Stream removed, broadcasting: 5\n" Mar 8 14:49:57.185: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 8 14:49:57.186: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 8 14:49:57.189: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 14:49:57.189: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 14:49:57.189: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 8 14:49:57.192: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 8 14:49:57.333: INFO: stderr: "I0308 14:49:57.279008 218 log.go:172] (0xc00013a580) (0xc0006d46e0) Create stream\nI0308 14:49:57.279045 218 log.go:172] (0xc00013a580) (0xc0006d46e0) Stream added, broadcasting: 1\nI0308 14:49:57.281265 218 log.go:172] (0xc00013a580) Reply frame received for 1\nI0308 14:49:57.281304 218 log.go:172] (0xc00013a580) (0xc0003f8c80) Create stream\nI0308 14:49:57.281312 218 log.go:172] (0xc00013a580) (0xc0003f8c80) Stream added, broadcasting: 3\nI0308 14:49:57.282422 218 log.go:172] (0xc00013a580) Reply frame received for 3\nI0308 14:49:57.282442 218 log.go:172] (0xc00013a580) (0xc0006d4780) Create stream\nI0308 14:49:57.282448 218 log.go:172] (0xc00013a580) (0xc0006d4780) Stream added, broadcasting: 5\nI0308 14:49:57.283020 218 log.go:172] (0xc00013a580) Reply frame received for 5\nI0308 14:49:57.330812 218 log.go:172] (0xc00013a580) Data frame received for 5\nI0308 14:49:57.330828 218 log.go:172] (0xc0006d4780) (5) Data frame handling\nI0308 14:49:57.330844 218 log.go:172] (0xc00013a580) Data frame received for 3\nI0308 14:49:57.330852 218 log.go:172] (0xc0003f8c80) (3) Data frame handling\nI0308 14:49:57.330858 218 log.go:172] (0xc0003f8c80) (3) Data frame sent\nI0308 14:49:57.330862 218 log.go:172] (0xc00013a580) Data frame received for 3\nI0308 14:49:57.330865 218 log.go:172] (0xc0003f8c80) (3) Data frame handling\nI0308 14:49:57.331399 218 log.go:172] (0xc00013a580) Data frame received for 1\nI0308 14:49:57.331424 218 log.go:172] (0xc0006d46e0) (1) Data frame handling\nI0308 14:49:57.331437 218 log.go:172] (0xc0006d46e0) (1) Data frame sent\nI0308 14:49:57.331446 218 log.go:172] (0xc00013a580) (0xc0006d46e0) Stream removed, broadcasting: 1\nI0308 14:49:57.331463 218 log.go:172] (0xc00013a580) Go away received\nI0308 14:49:57.331562 218 log.go:172] 
(0xc00013a580) (0xc0006d46e0) Stream removed, broadcasting: 1\nI0308 14:49:57.331574 218 log.go:172] (0xc00013a580) (0xc0003f8c80) Stream removed, broadcasting: 3\nI0308 14:49:57.331579 218 log.go:172] (0xc00013a580) (0xc0006d4780) Stream removed, broadcasting: 5\n" Mar 8 14:49:57.333: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 8 14:49:57.333: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 8 14:49:57.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 8 14:49:57.495: INFO: stderr: "I0308 14:49:57.418368 239 log.go:172] (0xc0006c4420) (0xc0000d9400) Create stream\nI0308 14:49:57.418419 239 log.go:172] (0xc0006c4420) (0xc0000d9400) Stream added, broadcasting: 1\nI0308 14:49:57.419620 239 log.go:172] (0xc0006c4420) Reply frame received for 1\nI0308 14:49:57.419644 239 log.go:172] (0xc0006c4420) (0xc0002d8000) Create stream\nI0308 14:49:57.419651 239 log.go:172] (0xc0006c4420) (0xc0002d8000) Stream added, broadcasting: 3\nI0308 14:49:57.420085 239 log.go:172] (0xc0006c4420) Reply frame received for 3\nI0308 14:49:57.420104 239 log.go:172] (0xc0006c4420) (0xc0000d94a0) Create stream\nI0308 14:49:57.420112 239 log.go:172] (0xc0006c4420) (0xc0000d94a0) Stream added, broadcasting: 5\nI0308 14:49:57.420645 239 log.go:172] (0xc0006c4420) Reply frame received for 5\nI0308 14:49:57.492489 239 log.go:172] (0xc0006c4420) Data frame received for 3\nI0308 14:49:57.492515 239 log.go:172] (0xc0002d8000) (3) Data frame handling\nI0308 14:49:57.492528 239 log.go:172] (0xc0002d8000) (3) Data frame sent\nI0308 14:49:57.492534 239 log.go:172] (0xc0006c4420) Data frame received for 3\nI0308 14:49:57.492539 239 log.go:172] (0xc0002d8000) (3) Data frame handling\nI0308 14:49:57.492814 239 log.go:172] (0xc0006c4420) Data 
frame received for 5\nI0308 14:49:57.492832 239 log.go:172] (0xc0000d94a0) (5) Data frame handling\nI0308 14:49:57.493628 239 log.go:172] (0xc0006c4420) Data frame received for 1\nI0308 14:49:57.493641 239 log.go:172] (0xc0000d9400) (1) Data frame handling\nI0308 14:49:57.493651 239 log.go:172] (0xc0000d9400) (1) Data frame sent\nI0308 14:49:57.493667 239 log.go:172] (0xc0006c4420) (0xc0000d9400) Stream removed, broadcasting: 1\nI0308 14:49:57.493791 239 log.go:172] (0xc0006c4420) (0xc0000d9400) Stream removed, broadcasting: 1\nI0308 14:49:57.493803 239 log.go:172] (0xc0006c4420) (0xc0002d8000) Stream removed, broadcasting: 3\nI0308 14:49:57.493810 239 log.go:172] (0xc0006c4420) (0xc0000d94a0) Stream removed, broadcasting: 5\n" Mar 8 14:49:57.495: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 8 14:49:57.495: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 8 14:49:57.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 8 14:49:57.641: INFO: stderr: "I0308 14:49:57.571631 261 log.go:172] (0xc00013a840) (0xc000209360) Create stream\nI0308 14:49:57.571666 261 log.go:172] (0xc00013a840) (0xc000209360) Stream added, broadcasting: 1\nI0308 14:49:57.572862 261 log.go:172] (0xc00013a840) Reply frame received for 1\nI0308 14:49:57.572888 261 log.go:172] (0xc00013a840) (0xc00074c000) Create stream\nI0308 14:49:57.572898 261 log.go:172] (0xc00013a840) (0xc00074c000) Stream added, broadcasting: 3\nI0308 14:49:57.573437 261 log.go:172] (0xc00013a840) Reply frame received for 3\nI0308 14:49:57.573455 261 log.go:172] (0xc00013a840) (0xc000209400) Create stream\nI0308 14:49:57.573463 261 log.go:172] (0xc00013a840) (0xc000209400) Stream added, broadcasting: 5\nI0308 14:49:57.573936 261 log.go:172] (0xc00013a840) Reply 
frame received for 5\nI0308 14:49:57.639436 261 log.go:172] (0xc00013a840) Data frame received for 3\nI0308 14:49:57.639458 261 log.go:172] (0xc00074c000) (3) Data frame handling\nI0308 14:49:57.639467 261 log.go:172] (0xc00074c000) (3) Data frame sent\nI0308 14:49:57.639567 261 log.go:172] (0xc00013a840) Data frame received for 3\nI0308 14:49:57.639582 261 log.go:172] (0xc00074c000) (3) Data frame handling\nI0308 14:49:57.639951 261 log.go:172] (0xc00013a840) Data frame received for 5\nI0308 14:49:57.639961 261 log.go:172] (0xc000209400) (5) Data frame handling\nI0308 14:49:57.640619 261 log.go:172] (0xc00013a840) Data frame received for 1\nI0308 14:49:57.640627 261 log.go:172] (0xc000209360) (1) Data frame handling\nI0308 14:49:57.640641 261 log.go:172] (0xc000209360) (1) Data frame sent\nI0308 14:49:57.640650 261 log.go:172] (0xc00013a840) (0xc000209360) Stream removed, broadcasting: 1\nI0308 14:49:57.640711 261 log.go:172] (0xc00013a840) Go away received\nI0308 14:49:57.640742 261 log.go:172] (0xc00013a840) (0xc000209360) Stream removed, broadcasting: 1\nI0308 14:49:57.640749 261 log.go:172] (0xc00013a840) (0xc00074c000) Stream removed, broadcasting: 3\nI0308 14:49:57.640755 261 log.go:172] (0xc00013a840) (0xc000209400) Stream removed, broadcasting: 5\n" Mar 8 14:49:57.642: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 8 14:49:57.642: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 8 14:49:57.642: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 14:49:57.643: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 8 14:50:07.651: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 14:50:07.652: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 8 14:50:07.652: INFO: Waiting for pod ss-2 to enter Running - 
Ready=false, currently Running - Ready=false Mar 8 14:50:07.691: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:50:07.691: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC }] Mar 8 14:50:07.691: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:07.691: INFO: ss-2 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:07.691: INFO: Mar 8 14:50:07.691: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 14:50:08.695: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:50:08.695: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC }] Mar 8 14:50:08.695: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:08.695: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:08.695: INFO: Mar 8 14:50:08.695: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 14:50:09.698: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:50:09.698: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC }] Mar 8 14:50:09.698: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 
14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:09.698: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:09.698: INFO: Mar 8 14:50:09.698: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 14:50:10.702: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:50:10.703: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC }] Mar 8 14:50:10.703: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:10.703: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:10.703: INFO: Mar 8 14:50:10.703: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 14:50:11.708: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:50:11.708: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC }] Mar 8 14:50:11.708: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:11.708: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 
UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:11.708: INFO: Mar 8 14:50:11.708: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 14:50:12.727: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:50:12.727: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC }] Mar 8 14:50:12.727: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:12.727: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:12.727: INFO: Mar 8 14:50:12.727: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 14:50:13.732: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:50:13.732: 
INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC }] Mar 8 14:50:13.732: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:13.732: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:13.732: INFO: Mar 8 14:50:13.732: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 14:50:14.800: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:50:14.800: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC }] Mar 8 14:50:14.800: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:14.800: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:14.800: INFO: Mar 8 14:50:14.800: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 14:50:15.804: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:50:15.804: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC }] Mar 8 14:50:15.804: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady 
containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:15.804: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:15.804: INFO: Mar 8 14:50:15.804: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 8 14:50:16.808: INFO: POD NODE PHASE GRACE CONDITIONS Mar 8 14:50:16.808: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:28 +0000 UTC }] Mar 8 14:50:16.808: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:16.808: INFO: ss-2 hunter-worker Running 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 14:49:46 +0000 UTC }] Mar 8 14:50:16.808: INFO: Mar 8 14:50:16.808: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace e2e-tests-statefulset-k8gqr Mar 8 14:50:17.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:50:18.630: INFO: rc: 1 Mar 8 14:50:18.630: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001527a70 exit status 1 true [0xc00010e3d8 0xc00010e4b8 0xc00010e738] [0xc00010e3d8 0xc00010e4b8 0xc00010e738] [0xc00010e4a8 0xc00010e668] [0x935700 0x935700] 0xc000ff35c0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Mar 8 14:50:28.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:50:28.725: INFO: rc: 1 Mar 8 14:50:28.725: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001527b90 exit status 1 true [0xc00010e778 0xc00010e7d8 0xc00010e838] [0xc00010e778 0xc00010e7d8 0xc00010e838] [0xc00010e7b0 0xc00010e818] [0x935700 0x935700] 0xc0013381e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 8 14:50:38.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:50:38.845: INFO: rc: 1 Mar 8 14:50:38.845: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ab9a40 exit status 1 true [0xc0006a8250 0xc0006a8268 0xc0006a8280] [0xc0006a8250 0xc0006a8268 0xc0006a8280] [0xc0006a8260 0xc0006a8278] [0x935700 0x935700] 0xc001662b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 8 14:50:48.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:50:48.938: INFO: rc: 1 Mar 8 14:50:48.938: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ab9b90 exit status 1 true [0xc0006a8288 0xc0006a82a0 0xc0006a82b8] [0xc0006a8288 0xc0006a82a0 0xc0006a82b8] [0xc0006a8298 0xc0006a82b0] [0x935700 0x935700] 0xc001662de0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: 
exit status 1 Mar 8 14:50:58.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:50:59.045: INFO: rc: 1 Mar 8 14:50:59.045: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001527d10 exit status 1 true [0xc00010e880 0xc00010e928 0xc00010e9b8] [0xc00010e880 0xc00010e928 0xc00010e9b8] [0xc00010e908 0xc00010e970] [0x935700 0x935700] 0xc001338480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 8 14:51:09.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:51:09.137: INFO: rc: 1 Mar 8 14:51:09.137: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00135b590 exit status 1 true [0xc000a2e008 0xc000a2e060 0xc000a2e098] [0xc000a2e008 0xc000a2e060 0xc000a2e098] [0xc000a2e040 0xc000a2e090] [0x935700 0x935700] 0xc0009ca1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 8 14:51:19.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:51:19.210: INFO: rc: 1 Mar 8 14:51:19.210: INFO: Waiting 10s to retry failed RunHostCmd: error 
running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ab9d70 exit status 1 true [0xc0006a82c0 0xc0006a82d8 0xc0006a82f0] [0xc0006a82c0 0xc0006a82d8 0xc0006a82f0] [0xc0006a82d0 0xc0006a82e8] [0x935700 0x935700] 0xc001663080 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 8 14:51:29.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:51:29.312: INFO: rc: 1 Mar 8 14:51:29.312: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000449890 exit status 1 true [0xc001232078 0xc001232090 0xc0012320a8] [0xc001232078 0xc001232090 0xc0012320a8] [0xc001232088 0xc0012320a0] [0x935700 0x935700] 0xc001254c00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 8 14:51:39.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:51:39.413: INFO: rc: 1 Mar 8 14:51:39.413: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001527ec0 exit status 1 true [0xc00010e9f0 0xc00010ea90 0xc00010eb20] 
[0xc00010e9f0 0xc00010ea90 0xc00010eb20] [0xc00010ea60 0xc00010eae8] [0x935700 0x935700] 0xc001338720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 8 14:51:49.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:51:49.529: INFO: rc: 1 Mar 8 14:51:49.529: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ab9ec0 exit status 1 true [0xc0006a82f8 0xc0006a8310 0xc0006a8328] [0xc0006a82f8 0xc0006a8310 0xc0006a8328] [0xc0006a8308 0xc0006a8320] [0x935700 0x935700] 0xc001663320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 8 14:51:59.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:51:59.634: INFO: rc: 1 Mar 8 14:51:59.634: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00135b6e0 exit status 1 true [0xc000a2e0c0 0xc000a2e100 0xc000a2e140] [0xc000a2e0c0 0xc000a2e100 0xc000a2e140] [0xc000a2e0e0 0xc000a2e138] [0x935700 0x935700] 0xc0009ca480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 8 14:52:09.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:52:09.739: INFO: rc: 1 Mar 8 14:52:09.739: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000afa0f0 exit status 1 true [0xc000a2e008 0xc000a2e060 0xc000a2e098] [0xc000a2e008 0xc000a2e060 0xc000a2e098] [0xc000a2e040 0xc000a2e090] [0x935700 0x935700] 0xc000ff21e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 8 14:52:19.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:52:19.851: INFO: rc: 1 Mar 8 14:52:19.851: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00135b5f0 exit status 1 true [0xc00010e078 0xc00010e0c8 0xc00010e180] [0xc00010e078 0xc00010e0c8 0xc00010e180] [0xc00010e0a0 0xc00010e160] [0x935700 0x935700] 0xc0009ca1e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 8 14:52:29.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:52:29.958: INFO: rc: 1 Mar 8 14:52:29.958: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00135b740 exit status 1 true [0xc00010e1c8 0xc00010e268 0xc00010e378] [0xc00010e1c8 0xc00010e268 0xc00010e378] [0xc00010e240 0xc00010e2d0] [0x935700 0x935700] 0xc0009ca480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 8 14:52:39.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:52:40.076: INFO: rc: 1 Mar 8 14:52:40.076: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001526180 exit status 1 true [0xc0006a8000 0xc0006a8018 0xc0006a8038] [0xc0006a8000 0xc0006a8018 0xc0006a8038] [0xc0006a8010 0xc0006a8030] [0x935700 0x935700] 0xc0013383c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Mar 8 14:52:50.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 14:52:50.168: INFO: rc: 1 Mar 8 14:52:50.168: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ab8120 exit status 1 true [0xc001232000 0xc001232018 0xc001232030] [0xc001232000 0xc001232018 0xc001232030] [0xc001232010 0xc001232028] [0x935700 
0x935700] 0xc0016621e0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:53:00.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:53:00.282: INFO: rc: 1
Mar 8 14:53:00.282: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0015262d0 exit status 1 true [0xc0006a8048 0xc0006a8068 0xc0006a8098] [0xc0006a8048 0xc0006a8068 0xc0006a8098] [0xc0006a8058 0xc0006a8088] [0x935700 0x935700] 0xc001338660 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:53:10.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:53:10.386: INFO: rc: 1
Mar 8 14:53:10.386: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001526420 exit status 1 true [0xc0006a80a0 0xc0006a80d8 0xc0006a8100] [0xc0006a80a0 0xc0006a80d8 0xc0006a8100] [0xc0006a80b0 0xc0006a80f8] [0x935700 0x935700] 0xc001338900 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:53:20.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:53:20.477: INFO: rc: 1
Mar 8 14:53:20.477: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ab82a0 exit status 1 true [0xc001232038 0xc001232050 0xc001232068] [0xc001232038 0xc001232050 0xc001232068] [0xc001232048 0xc001232060] [0x935700 0x935700] 0xc001662480 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:53:30.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:53:30.573: INFO: rc: 1
Mar 8 14:53:30.573: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0015265a0 exit status 1 true [0xc0006a8108 0xc0006a8180 0xc0006a8198] [0xc0006a8108 0xc0006a8180 0xc0006a8198] [0xc0006a8178 0xc0006a8190] [0x935700 0x935700] 0xc001338ba0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:53:40.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:53:40.664: INFO: rc: 1
Mar 8 14:53:40.664: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0015266f0 exit status 1 true [0xc0006a81a0 0xc0006a81b8 0xc0006a81d8] [0xc0006a81a0 0xc0006a81b8 0xc0006a81d8] [0xc0006a81b0 0xc0006a81d0] [0x935700 0x935700] 0xc001338e40 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:53:50.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:53:50.788: INFO: rc: 1
Mar 8 14:53:50.788: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0015268d0 exit status 1 true [0xc0006a8200 0xc0006a8218 0xc0006a8230] [0xc0006a8200 0xc0006a8218 0xc0006a8230] [0xc0006a8210 0xc0006a8228] [0x935700 0x935700] 0xc0013390e0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:54:00.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:54:00.880: INFO: rc: 1
Mar 8 14:54:00.880: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001526a20 exit status 1 true [0xc0006a8238 0xc0006a8250 0xc0006a8268] [0xc0006a8238 0xc0006a8250 0xc0006a8268] [0xc0006a8248 0xc0006a8260] [0x935700 0x935700] 0xc001339380 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:54:10.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:54:10.985: INFO: rc: 1
Mar 8 14:54:10.985: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00135b8f0 exit status 1 true [0xc00010e3d8 0xc00010e4b8 0xc00010e738] [0xc00010e3d8 0xc00010e4b8 0xc00010e738] [0xc00010e4a8 0xc00010e668] [0x935700 0x935700] 0xc0009ca720 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:54:20.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:54:21.102: INFO: rc: 1
Mar 8 14:54:21.102: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000afa120 exit status 1 true [0xc00016e000 0xc000a2e040 0xc000a2e090] [0xc00016e000 0xc000a2e040 0xc000a2e090] [0xc000a2e038 0xc000a2e070] [0x935700 0x935700] 0xc000ff21e0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:54:31.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:54:31.199: INFO: rc: 1
Mar 8 14:54:31.199: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000afa240 exit status 1 true [0xc000a2e098 0xc000a2e0e0 0xc000a2e138] [0xc000a2e098 0xc000a2e0e0 0xc000a2e138] [0xc000a2e0c8 0xc000a2e120] [0x935700 0x935700] 0xc000ff2ba0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:54:41.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:54:41.303: INFO: rc: 1
Mar 8 14:54:41.303: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000afa390 exit status 1 true [0xc000a2e140 0xc000a2e168 0xc000a2e188] [0xc000a2e140 0xc000a2e168 0xc000a2e188] [0xc000a2e158 0xc000a2e180] [0x935700 0x935700] 0xc000ff3740 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:54:51.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:54:51.405: INFO: rc: 1
Mar 8 14:54:51.405: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000afa4e0 exit status 1 true [0xc000a2e190 0xc000a2e208 0xc000a2e300] [0xc000a2e190 0xc000a2e208 0xc000a2e300] [0xc000a2e1b0 0xc000a2e2f0] [0x935700 0x935700] 0xc0009ca060 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:55:01.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:55:01.506: INFO: rc: 1
Mar 8 14:55:01.506: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000afa630 exit status 1 true [0xc000a2e338 0xc000a2e400 0xc000a2e578] [0xc000a2e338 0xc000a2e400 0xc000a2e578] [0xc000a2e3c0 0xc000a2e478] [0x935700 0x935700] 0xc0009ca300 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:55:11.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:55:11.588: INFO: rc: 1
Mar 8 14:55:11.589: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0015261b0 exit status 1 true [0xc00010e078 0xc00010e0c8 0xc00010e180] [0xc00010e078 0xc00010e0c8 0xc00010e180] [0xc00010e0a0 0xc00010e160] [0x935700 0x935700] 0xc0013383c0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Mar 8 14:55:21.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-k8gqr ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Mar 8 14:55:21.697: INFO: rc: 1
Mar 8 14:55:21.697: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Mar 8 14:55:21.697: INFO: Scaling statefulset ss to 0
Mar 8 14:55:21.708: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Mar 8 14:55:21.710: INFO: Deleting all statefulset in ns e2e-tests-statefulset-k8gqr
Mar 8 14:55:21.712: INFO: Scaling statefulset ss to 0
Mar 8 14:55:21.720: INFO: Waiting for statefulset status.replicas updated to 0
Mar 8 14:55:21.722: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:55:21.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-k8gqr" for this suite.
Mar 8 14:55:27.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:55:27.774: INFO: namespace: e2e-tests-statefulset-k8gqr, resource: bindings, ignored listing per whitelist
Mar 8 14:55:27.843: INFO: namespace e2e-tests-statefulset-k8gqr deletion completed in 6.104041764s

• [SLOW TEST:371.894 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:55:27.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-d950fee4-614c-11ea-b38e-0242ac11000f
STEP: Creating secret with name secret-projected-all-test-volume-d950fec1-614c-11ea-b38e-0242ac11000f
STEP: Creating a pod to test Check all projections for projected volume plugin
Mar 8 14:55:27.994: INFO: Waiting up to 5m0s for pod "projected-volume-d950fe6d-614c-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-q4npf" to be "success or failure"
Mar 8 14:55:28.001: INFO: Pod "projected-volume-d950fe6d-614c-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.072634ms
Mar 8 14:55:30.005: INFO: Pod "projected-volume-d950fe6d-614c-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010553831s
Mar 8 14:55:32.009: INFO: Pod "projected-volume-d950fe6d-614c-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015022686s
Mar 8 14:55:34.013: INFO: Pod "projected-volume-d950fe6d-614c-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018769248s
STEP: Saw pod success
Mar 8 14:55:34.013: INFO: Pod "projected-volume-d950fe6d-614c-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 14:55:34.015: INFO: Trying to get logs from node hunter-worker pod projected-volume-d950fe6d-614c-11ea-b38e-0242ac11000f container projected-all-volume-test:
STEP: delete the pod
Mar 8 14:55:34.044: INFO: Waiting for pod projected-volume-d950fe6d-614c-11ea-b38e-0242ac11000f to disappear
Mar 8 14:55:34.076: INFO: Pod projected-volume-d950fe6d-614c-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:55:34.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q4npf" for this suite.
Mar 8 14:55:40.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:55:40.137: INFO: namespace: e2e-tests-projected-q4npf, resource: bindings, ignored listing per whitelist
Mar 8 14:55:40.159: INFO: namespace e2e-tests-projected-q4npf deletion completed in 6.080240039s

• [SLOW TEST:12.315 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:55:40.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 8 14:55:40.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:55:44.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vg28q" for this suite.
Mar 8 14:56:28.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:56:28.387: INFO: namespace: e2e-tests-pods-vg28q, resource: bindings, ignored listing per whitelist
Mar 8 14:56:28.433: INFO: namespace e2e-tests-pods-vg28q deletion completed in 44.145958186s

• [SLOW TEST:48.274 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:56:28.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0308 14:56:38.570580 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 8 14:56:38.570: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:56:38.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xpqjr" for this suite.
Mar 8 14:56:44.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:56:44.637: INFO: namespace: e2e-tests-gc-xpqjr, resource: bindings, ignored listing per whitelist
Mar 8 14:56:44.663: INFO: namespace e2e-tests-gc-xpqjr deletion completed in 6.090071955s

• [SLOW TEST:16.230 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:56:44.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 8 14:56:44.782: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Mar 8 14:56:44.790: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:44.792: INFO: Number of nodes with available pods: 0
Mar 8 14:56:44.792: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 14:56:45.823: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:45.828: INFO: Number of nodes with available pods: 0
Mar 8 14:56:45.828: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 14:56:46.796: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:46.800: INFO: Number of nodes with available pods: 2
Mar 8 14:56:46.800: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Mar 8 14:56:46.851: INFO: Wrong image for pod: daemon-set-lzlbl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:46.851: INFO: Wrong image for pod: daemon-set-qcklb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:46.858: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:47.862: INFO: Wrong image for pod: daemon-set-lzlbl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:47.862: INFO: Wrong image for pod: daemon-set-qcklb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:47.866: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:48.862: INFO: Wrong image for pod: daemon-set-lzlbl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:48.862: INFO: Wrong image for pod: daemon-set-qcklb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:48.865: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:49.862: INFO: Wrong image for pod: daemon-set-lzlbl. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:49.862: INFO: Pod daemon-set-lzlbl is not available
Mar 8 14:56:49.862: INFO: Wrong image for pod: daemon-set-qcklb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:49.866: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:50.862: INFO: Pod daemon-set-dgc8x is not available
Mar 8 14:56:50.862: INFO: Wrong image for pod: daemon-set-qcklb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:50.865: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:51.863: INFO: Pod daemon-set-dgc8x is not available
Mar 8 14:56:51.863: INFO: Wrong image for pod: daemon-set-qcklb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:51.866: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:52.862: INFO: Pod daemon-set-dgc8x is not available
Mar 8 14:56:52.862: INFO: Wrong image for pod: daemon-set-qcklb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:52.865: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:53.862: INFO: Pod daemon-set-dgc8x is not available
Mar 8 14:56:53.862: INFO: Wrong image for pod: daemon-set-qcklb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:53.864: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:54.862: INFO: Wrong image for pod: daemon-set-qcklb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:54.866: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:55.863: INFO: Wrong image for pod: daemon-set-qcklb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Mar 8 14:56:55.863: INFO: Pod daemon-set-qcklb is not available
Mar 8 14:56:55.867: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:56.862: INFO: Pod daemon-set-tpvh7 is not available
Mar 8 14:56:56.865: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Mar 8 14:56:56.870: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:56.872: INFO: Number of nodes with available pods: 1
Mar 8 14:56:56.872: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 8 14:56:57.877: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:57.880: INFO: Number of nodes with available pods: 1
Mar 8 14:56:57.880: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 8 14:56:58.877: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 14:56:58.880: INFO: Number of nodes with available pods: 2
Mar 8 14:56:58.880: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-kr57q, will wait for the garbage collector to delete the pods
Mar 8 14:56:58.954: INFO: Deleting DaemonSet.extensions daemon-set took: 6.766064ms
Mar 8 14:56:59.054: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.262784ms
Mar 8 14:57:07.966: INFO: Number of nodes with available pods: 0
Mar 8 14:57:07.966: INFO: Number of running nodes: 0, number of available pods: 0
Mar 8 14:57:07.969: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-kr57q/daemonsets","resourceVersion":"3163"},"items":null}
Mar 8 14:57:07.971: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-kr57q/pods","resourceVersion":"3163"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:57:07.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-kr57q" for this suite.
Mar 8 14:57:13.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:57:14.024: INFO: namespace: e2e-tests-daemonsets-kr57q, resource: bindings, ignored listing per whitelist
Mar 8 14:57:14.067: INFO: namespace e2e-tests-daemonsets-kr57q deletion completed in 6.085182272s

• [SLOW TEST:29.404 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client Mar 8 14:57:14.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Mar 8 14:57:19.002: INFO: Successfully updated pod "labelsupdate189c81f9-614d-11ea-b38e-0242ac11000f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 14:57:21.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7fct9" for this suite. Mar 8 14:57:39.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 14:57:39.122: INFO: namespace: e2e-tests-projected-7fct9, resource: bindings, ignored listing per whitelist Mar 8 14:57:39.139: INFO: namespace e2e-tests-projected-7fct9 deletion completed in 18.086759047s • [SLOW TEST:25.072 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:57:39.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-27932115-614d-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 8 14:57:39.266: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-27939c84-614d-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-4zcxj" to be "success or failure"
Mar 8 14:57:39.270: INFO: Pod "pod-projected-secrets-27939c84-614d-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257477ms
Mar 8 14:57:41.274: INFO: Pod "pod-projected-secrets-27939c84-614d-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008271732s
Mar 8 14:57:43.278: INFO: Pod "pod-projected-secrets-27939c84-614d-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011946791s
STEP: Saw pod success
Mar 8 14:57:43.278: INFO: Pod "pod-projected-secrets-27939c84-614d-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 14:57:43.280: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-27939c84-614d-11ea-b38e-0242ac11000f container projected-secret-volume-test:
STEP: delete the pod
Mar 8 14:57:43.326: INFO: Waiting for pod pod-projected-secrets-27939c84-614d-11ea-b38e-0242ac11000f to disappear
Mar 8 14:57:43.350: INFO: Pod pod-projected-secrets-27939c84-614d-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:57:43.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4zcxj" for this suite.
Mar 8 14:57:49.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:57:49.371: INFO: namespace: e2e-tests-projected-4zcxj, resource: bindings, ignored listing per whitelist
Mar 8 14:57:49.444: INFO: namespace e2e-tests-projected-4zcxj deletion completed in 6.09065082s

• [SLOW TEST:10.305 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:57:49.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Mar 8 14:57:49.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tsqx9'
Mar 8 14:57:51.313: INFO: stderr: ""
Mar 8 14:57:51.313: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Mar 8 14:57:52.317: INFO: Selector matched 1 pods for map[app:redis]
Mar 8 14:57:52.317: INFO: Found 0 / 1
Mar 8 14:57:53.316: INFO: Selector matched 1 pods for map[app:redis]
Mar 8 14:57:53.317: INFO: Found 1 / 1
Mar 8 14:57:53.317: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Mar 8 14:57:53.319: INFO: Selector matched 1 pods for map[app:redis]
Mar 8 14:57:53.319: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 8 14:57:53.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-rszvl --namespace=e2e-tests-kubectl-tsqx9 -p {"metadata":{"annotations":{"x":"y"}}}'
Mar 8 14:57:53.421: INFO: stderr: ""
Mar 8 14:57:53.421: INFO: stdout: "pod/redis-master-rszvl patched\n"
STEP: checking annotations
Mar 8 14:57:53.423: INFO: Selector matched 1 pods for map[app:redis]
Mar 8 14:57:53.423: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:57:53.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tsqx9" for this suite.
Mar 8 14:58:15.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:58:15.524: INFO: namespace: e2e-tests-kubectl-tsqx9, resource: bindings, ignored listing per whitelist
Mar 8 14:58:15.532: INFO: namespace e2e-tests-kubectl-tsqx9 deletion completed in 22.106355778s

• [SLOW TEST:26.088 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:58:15.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 8 14:58:15.629: INFO: Waiting up to 5m0s for pod "pod-3d3efa23-614d-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-hp67t" to be "success or failure"
Mar 8 14:58:15.637: INFO: Pod "pod-3d3efa23-614d-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.836564ms
Mar 8 14:58:17.641: INFO: Pod "pod-3d3efa23-614d-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011927537s
Mar 8 14:58:19.645: INFO: Pod "pod-3d3efa23-614d-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015955633s
STEP: Saw pod success
Mar 8 14:58:19.645: INFO: Pod "pod-3d3efa23-614d-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 14:58:19.648: INFO: Trying to get logs from node hunter-worker pod pod-3d3efa23-614d-11ea-b38e-0242ac11000f container test-container:
STEP: delete the pod
Mar 8 14:58:19.661: INFO: Waiting for pod pod-3d3efa23-614d-11ea-b38e-0242ac11000f to disappear
Mar 8 14:58:19.672: INFO: Pod pod-3d3efa23-614d-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:58:19.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hp67t" for this suite.
Mar 8 14:58:25.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:58:25.718: INFO: namespace: e2e-tests-emptydir-hp67t, resource: bindings, ignored listing per whitelist
Mar 8 14:58:25.771: INFO: namespace e2e-tests-emptydir-hp67t deletion completed in 6.096715748s

• [SLOW TEST:10.239 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:58:25.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 8 14:58:25.866: INFO: Waiting up to 5m0s for pod "pod-435a9cea-614d-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-rsgqr" to be "success or failure"
Mar 8 14:58:25.870: INFO: Pod "pod-435a9cea-614d-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021009ms
Mar 8 14:58:27.875: INFO: Pod "pod-435a9cea-614d-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008446879s
STEP: Saw pod success
Mar 8 14:58:27.875: INFO: Pod "pod-435a9cea-614d-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 14:58:27.878: INFO: Trying to get logs from node hunter-worker pod pod-435a9cea-614d-11ea-b38e-0242ac11000f container test-container:
STEP: delete the pod
Mar 8 14:58:27.927: INFO: Waiting for pod pod-435a9cea-614d-11ea-b38e-0242ac11000f to disappear
Mar 8 14:58:27.930: INFO: Pod pod-435a9cea-614d-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:58:27.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rsgqr" for this suite.
Mar 8 14:58:33.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:58:33.958: INFO: namespace: e2e-tests-emptydir-rsgqr, resource: bindings, ignored listing per whitelist
Mar 8 14:58:34.063: INFO: namespace e2e-tests-emptydir-rsgqr deletion completed in 6.128869621s

• [SLOW TEST:8.291 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:58:34.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 8 14:58:34.160: INFO: Creating ReplicaSet my-hostname-basic-484d06af-614d-11ea-b38e-0242ac11000f
Mar 8 14:58:34.170: INFO: Pod name my-hostname-basic-484d06af-614d-11ea-b38e-0242ac11000f: Found 0 pods out of 1
Mar 8 14:58:39.175: INFO: Pod name my-hostname-basic-484d06af-614d-11ea-b38e-0242ac11000f: Found 1 pods out of 1
Mar 8 14:58:39.175: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-484d06af-614d-11ea-b38e-0242ac11000f" is running
Mar 8 14:58:39.177: INFO: Pod "my-hostname-basic-484d06af-614d-11ea-b38e-0242ac11000f-7srcl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 14:58:34 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 14:58:38 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 14:58:38 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 14:58:34 +0000 UTC Reason: Message:}])
Mar 8 14:58:39.177: INFO: Trying to dial the pod
Mar 8 14:58:44.196: INFO: Controller my-hostname-basic-484d06af-614d-11ea-b38e-0242ac11000f: Got expected result from replica 1 [my-hostname-basic-484d06af-614d-11ea-b38e-0242ac11000f-7srcl]: "my-hostname-basic-484d06af-614d-11ea-b38e-0242ac11000f-7srcl", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:58:44.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-9rnhq" for this suite.
Mar 8 14:58:50.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:58:50.299: INFO: namespace: e2e-tests-replicaset-9rnhq, resource: bindings, ignored listing per whitelist
Mar 8 14:58:50.334: INFO: namespace e2e-tests-replicaset-9rnhq deletion completed in 6.126874583s

• [SLOW TEST:16.271 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Proxy version v1
  should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:58:50.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-r2zqd in namespace e2e-tests-proxy-kmj2f
I0308 14:58:50.453264 6 runners.go:184] Created replication controller with name: proxy-service-r2zqd, namespace: e2e-tests-proxy-kmj2f, replica count: 1
I0308 14:58:51.503695 6 runners.go:184] proxy-service-r2zqd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0308 14:58:52.503914 6 runners.go:184] proxy-service-r2zqd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0308 14:58:53.504134 6 runners.go:184] proxy-service-r2zqd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0308 14:58:54.504426 6 runners.go:184] proxy-service-r2zqd Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0308 14:58:55.504646 6 runners.go:184] proxy-service-r2zqd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0308 14:58:56.504899 6 runners.go:184] proxy-service-r2zqd Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0308 14:58:57.505206 6 runners.go:184] proxy-service-r2zqd Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 8 14:58:57.508: INFO: setup took 7.096495187s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Mar 8 14:58:57.516: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-kmj2f/pods/http:proxy-service-r2zqd-jtkqw:1080/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:59:22.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-f5wjd" for this suite.
Mar 8 14:59:28.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 14:59:28.253: INFO: namespace: e2e-tests-kubelet-test-f5wjd, resource: bindings, ignored listing per whitelist
Mar 8 14:59:28.274: INFO: namespace e2e-tests-kubelet-test-f5wjd deletion completed in 6.064153428s

• [SLOW TEST:14.218 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 14:59:28.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 14:59:30.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-b56hb" for this suite.
Mar 8 15:00:20.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:00:20.459: INFO: namespace: e2e-tests-kubelet-test-b56hb, resource: bindings, ignored listing per whitelist
Mar 8 15:00:20.471: INFO: namespace e2e-tests-kubelet-test-b56hb deletion completed in 50.09378663s

• [SLOW TEST:52.197 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version
  should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:00:20.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 8 15:00:20.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Mar 8 15:00:20.681: INFO: stderr: ""
Mar 8 15:00:20.681: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T01:07:14Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:00:20.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ghzrc" for this suite.
Mar 8 15:00:27.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:00:27.115: INFO: namespace: e2e-tests-kubectl-ghzrc, resource: bindings, ignored listing per whitelist
Mar 8 15:00:27.169: INFO: namespace e2e-tests-kubectl-ghzrc deletion completed in 6.48361617s

• [SLOW TEST:6.698 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:00:27.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Mar 8 15:00:29.360: INFO: Pod pod-hostip-8bbe2955-614d-11ea-b38e-0242ac11000f has hostIP: 172.17.0.11
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:00:29.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-v7hrr" for this suite.
Mar 8 15:00:51.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:00:51.440: INFO: namespace: e2e-tests-pods-v7hrr, resource: bindings, ignored listing per whitelist
Mar 8 15:00:51.453: INFO: namespace e2e-tests-pods-v7hrr deletion completed in 22.08996464s

• [SLOW TEST:24.284 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:00:51.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:01:00.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-6llwk" for this suite.
Mar 8 15:01:22.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:01:22.812: INFO: namespace: e2e-tests-replication-controller-6llwk, resource: bindings, ignored listing per whitelist
Mar 8 15:01:22.817: INFO: namespace e2e-tests-replication-controller-6llwk deletion completed in 22.163808949s

• [SLOW TEST:31.363 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:01:22.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-acdf7e46-614d-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 8 15:01:22.905: INFO: Waiting up to 5m0s for pod "pod-configmaps-ace045e0-614d-11ea-b38e-0242ac11000f" in namespace "e2e-tests-configmap-xz72k" to be "success or failure"
Mar 8 15:01:22.909: INFO: Pod "pod-configmaps-ace045e0-614d-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208048ms
Mar 8 15:01:24.926: INFO: Pod "pod-configmaps-ace045e0-614d-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021350745s
STEP: Saw pod success
Mar 8 15:01:24.926: INFO: Pod "pod-configmaps-ace045e0-614d-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:01:24.929: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-ace045e0-614d-11ea-b38e-0242ac11000f container configmap-volume-test:
STEP: delete the pod
Mar 8 15:01:24.947: INFO: Waiting for pod pod-configmaps-ace045e0-614d-11ea-b38e-0242ac11000f to disappear
Mar 8 15:01:24.951: INFO: Pod pod-configmaps-ace045e0-614d-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:01:24.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xz72k" for this suite.
Mar 8 15:01:30.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:01:31.039: INFO: namespace: e2e-tests-configmap-xz72k, resource: bindings, ignored listing per whitelist
Mar 8 15:01:31.054: INFO: namespace e2e-tests-configmap-xz72k deletion completed in 6.099586896s

• [SLOW TEST:8.237 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:01:31.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-p7gz
STEP: Creating a pod to test atomic-volume-subpath
Mar 8 15:01:32.084: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p7gz" in namespace "e2e-tests-subpath-jknhd" to be "success or failure"
Mar 8 15:01:32.087: INFO: Pod "pod-subpath-test-configmap-p7gz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.887467ms
Mar 8 15:01:34.121: INFO: Pod "pod-subpath-test-configmap-p7gz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036340473s
Mar 8 15:01:36.125: INFO: Pod "pod-subpath-test-configmap-p7gz": Phase="Running", Reason="", readiness=false. Elapsed: 4.040531835s
Mar 8 15:01:38.129: INFO: Pod "pod-subpath-test-configmap-p7gz": Phase="Running", Reason="", readiness=false. Elapsed: 6.044670623s
Mar 8 15:01:40.132: INFO: Pod "pod-subpath-test-configmap-p7gz": Phase="Running", Reason="", readiness=false. Elapsed: 8.047944075s
Mar 8 15:01:42.136: INFO: Pod "pod-subpath-test-configmap-p7gz": Phase="Running", Reason="", readiness=false. Elapsed: 10.051666135s
Mar 8 15:01:44.156: INFO: Pod "pod-subpath-test-configmap-p7gz": Phase="Running", Reason="", readiness=false. Elapsed: 12.071375617s
Mar 8 15:01:46.160: INFO: Pod "pod-subpath-test-configmap-p7gz": Phase="Running", Reason="", readiness=false. Elapsed: 14.075242385s
Mar 8 15:01:48.164: INFO: Pod "pod-subpath-test-configmap-p7gz": Phase="Running", Reason="", readiness=false. Elapsed: 16.079237797s
Mar 8 15:01:50.166: INFO: Pod "pod-subpath-test-configmap-p7gz": Phase="Running", Reason="", readiness=false. Elapsed: 18.081894018s
Mar 8 15:01:52.174: INFO: Pod "pod-subpath-test-configmap-p7gz": Phase="Running", Reason="", readiness=false. Elapsed: 20.089209392s
Mar 8 15:01:54.177: INFO: Pod "pod-subpath-test-configmap-p7gz": Phase="Running", Reason="", readiness=false. Elapsed: 22.092910262s
Mar 8 15:01:56.182: INFO: Pod "pod-subpath-test-configmap-p7gz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.097429324s
STEP: Saw pod success
Mar 8 15:01:56.182: INFO: Pod "pod-subpath-test-configmap-p7gz" satisfied condition "success or failure"
Mar 8 15:01:56.184: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-p7gz container test-container-subpath-configmap-p7gz:
STEP: delete the pod
Mar 8 15:01:56.205: INFO: Waiting for pod pod-subpath-test-configmap-p7gz to disappear
Mar 8 15:01:56.209: INFO: Pod pod-subpath-test-configmap-p7gz no longer exists
STEP: Deleting pod pod-subpath-test-configmap-p7gz
Mar 8 15:01:56.209: INFO: Deleting pod "pod-subpath-test-configmap-p7gz" in namespace "e2e-tests-subpath-jknhd"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:01:56.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-jknhd" for this suite.
Mar 8 15:02:02.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:02:02.380: INFO: namespace: e2e-tests-subpath-jknhd, resource: bindings, ignored listing per whitelist
Mar 8 15:02:02.455: INFO: namespace e2e-tests-subpath-jknhd deletion completed in 6.239743506s

• [SLOW TEST:31.400 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:02:02.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Mar 8 15:02:06.625: INFO: error from create uninitialized namespace:
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:02:30.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-vc4sr" for this suite.
Mar 8 15:02:36.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:02:36.774: INFO: namespace: e2e-tests-namespaces-vc4sr, resource: bindings, ignored listing per whitelist
Mar 8 15:02:36.800: INFO: namespace e2e-tests-namespaces-vc4sr deletion completed in 6.089723004s
STEP: Destroying namespace "e2e-tests-nsdeletetest-nbd88" for this suite.
Mar 8 15:02:36.802: INFO: Namespace e2e-tests-nsdeletetest-nbd88 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-v4sgd" for this suite.
Mar 8 15:02:42.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:02:42.855: INFO: namespace: e2e-tests-nsdeletetest-v4sgd, resource: bindings, ignored listing per whitelist
Mar 8 15:02:42.891: INFO: namespace e2e-tests-nsdeletetest-v4sgd deletion completed in 6.088523294s
• [SLOW TEST:40.435 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:02:42.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-zvhwg
I0308 15:02:43.024200 6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-zvhwg, replica count: 1
I0308 15:02:44.074595 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0308 
15:02:45.074843 6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 8 15:02:45.221: INFO: Created: latency-svc-xfsml Mar 8 15:02:45.259: INFO: Got endpoints: latency-svc-xfsml [84.287104ms] Mar 8 15:02:45.308: INFO: Created: latency-svc-nvmkg Mar 8 15:02:45.319: INFO: Got endpoints: latency-svc-nvmkg [59.67852ms] Mar 8 15:02:45.348: INFO: Created: latency-svc-gt9nv Mar 8 15:02:45.378: INFO: Got endpoints: latency-svc-gt9nv [119.21325ms] Mar 8 15:02:45.390: INFO: Created: latency-svc-xldh4 Mar 8 15:02:45.397: INFO: Got endpoints: latency-svc-xldh4 [137.945826ms] Mar 8 15:02:45.425: INFO: Created: latency-svc-4vq8m Mar 8 15:02:45.434: INFO: Got endpoints: latency-svc-4vq8m [174.096443ms] Mar 8 15:02:45.463: INFO: Created: latency-svc-kgxlt Mar 8 15:02:45.469: INFO: Got endpoints: latency-svc-kgxlt [209.742481ms] Mar 8 15:02:45.528: INFO: Created: latency-svc-2jc49 Mar 8 15:02:45.531: INFO: Got endpoints: latency-svc-2jc49 [271.255311ms] Mar 8 15:02:45.593: INFO: Created: latency-svc-vgk6v Mar 8 15:02:45.602: INFO: Got endpoints: latency-svc-vgk6v [342.687587ms] Mar 8 15:02:45.623: INFO: Created: latency-svc-xl8fw Mar 8 15:02:45.659: INFO: Got endpoints: latency-svc-xl8fw [400.048513ms] Mar 8 15:02:45.692: INFO: Created: latency-svc-nkhqf Mar 8 15:02:45.721: INFO: Got endpoints: latency-svc-nkhqf [462.012961ms] Mar 8 15:02:45.746: INFO: Created: latency-svc-jdmb2 Mar 8 15:02:45.752: INFO: Got endpoints: latency-svc-jdmb2 [492.665101ms] Mar 8 15:02:45.833: INFO: Created: latency-svc-f9488 Mar 8 15:02:45.837: INFO: Got endpoints: latency-svc-f9488 [577.497161ms] Mar 8 15:02:45.891: INFO: Created: latency-svc-zrrsx Mar 8 15:02:45.897: INFO: Got endpoints: latency-svc-zrrsx [637.994788ms] Mar 8 15:02:45.930: INFO: Created: latency-svc-88m9d Mar 8 15:02:46.007: INFO: Got endpoints: latency-svc-88m9d [747.403653ms] Mar 8 15:02:46.011: INFO: Created: latency-svc-4gl29 Mar 8 
15:02:46.023: INFO: Got endpoints: latency-svc-4gl29 [763.599645ms] Mar 8 15:02:46.081: INFO: Created: latency-svc-pvq7v Mar 8 15:02:46.095: INFO: Got endpoints: latency-svc-pvq7v [835.6599ms] Mar 8 15:02:46.139: INFO: Created: latency-svc-t2bsl Mar 8 15:02:46.150: INFO: Got endpoints: latency-svc-t2bsl [830.983912ms] Mar 8 15:02:46.177: INFO: Created: latency-svc-s8cmx Mar 8 15:02:46.198: INFO: Got endpoints: latency-svc-s8cmx [820.150788ms] Mar 8 15:02:46.300: INFO: Created: latency-svc-bvqlg Mar 8 15:02:46.303: INFO: Got endpoints: latency-svc-bvqlg [905.374118ms] Mar 8 15:02:46.387: INFO: Created: latency-svc-2zrqt Mar 8 15:02:46.504: INFO: Got endpoints: latency-svc-2zrqt [1.070158841s] Mar 8 15:02:46.506: INFO: Created: latency-svc-lwd5g Mar 8 15:02:46.523: INFO: Got endpoints: latency-svc-lwd5g [1.053994737s] Mar 8 15:02:46.570: INFO: Created: latency-svc-59tmr Mar 8 15:02:46.600: INFO: Got endpoints: latency-svc-59tmr [1.069417541s] Mar 8 15:02:46.673: INFO: Created: latency-svc-fs642 Mar 8 15:02:46.675: INFO: Got endpoints: latency-svc-fs642 [1.073354919s] Mar 8 15:02:46.719: INFO: Created: latency-svc-w8bwm Mar 8 15:02:46.733: INFO: Got endpoints: latency-svc-w8bwm [1.073863853s] Mar 8 15:02:46.828: INFO: Created: latency-svc-lb6bk Mar 8 15:02:46.831: INFO: Got endpoints: latency-svc-lb6bk [1.109122391s] Mar 8 15:02:46.886: INFO: Created: latency-svc-w5h49 Mar 8 15:02:47.037: INFO: Got endpoints: latency-svc-w5h49 [1.28515956s] Mar 8 15:02:47.220: INFO: Created: latency-svc-w7lbc Mar 8 15:02:47.279: INFO: Created: latency-svc-mrlfl Mar 8 15:02:47.280: INFO: Got endpoints: latency-svc-w7lbc [1.442677364s] Mar 8 15:02:47.286: INFO: Got endpoints: latency-svc-mrlfl [1.388426421s] Mar 8 15:02:47.308: INFO: Created: latency-svc-wk8xn Mar 8 15:02:47.316: INFO: Got endpoints: latency-svc-wk8xn [1.308963343s] Mar 8 15:02:47.360: INFO: Created: latency-svc-ckzck Mar 8 15:02:47.394: INFO: Got endpoints: latency-svc-ckzck [1.370511165s] Mar 8 15:02:47.394: INFO: 
Created: latency-svc-dg5jc Mar 8 15:02:47.400: INFO: Got endpoints: latency-svc-dg5jc [1.305107359s] Mar 8 15:02:47.434: INFO: Created: latency-svc-vprm6 Mar 8 15:02:47.443: INFO: Got endpoints: latency-svc-vprm6 [1.292650801s] Mar 8 15:02:47.565: INFO: Created: latency-svc-5tfp5 Mar 8 15:02:47.653: INFO: Got endpoints: latency-svc-5tfp5 [1.454130136s] Mar 8 15:02:47.653: INFO: Created: latency-svc-82x4j Mar 8 15:02:47.763: INFO: Got endpoints: latency-svc-82x4j [1.459713711s] Mar 8 15:02:47.778: INFO: Created: latency-svc-ls59m Mar 8 15:02:47.791: INFO: Got endpoints: latency-svc-ls59m [1.287497086s] Mar 8 15:02:47.837: INFO: Created: latency-svc-jhpxm Mar 8 15:02:47.845: INFO: Got endpoints: latency-svc-jhpxm [1.3221697s] Mar 8 15:02:47.899: INFO: Created: latency-svc-w4ptw Mar 8 15:02:47.902: INFO: Got endpoints: latency-svc-w4ptw [1.302297125s] Mar 8 15:02:47.925: INFO: Created: latency-svc-q2sm5 Mar 8 15:02:47.930: INFO: Got endpoints: latency-svc-q2sm5 [1.254698409s] Mar 8 15:02:47.959: INFO: Created: latency-svc-mv6jc Mar 8 15:02:47.983: INFO: Got endpoints: latency-svc-mv6jc [1.249499376s] Mar 8 15:02:48.031: INFO: Created: latency-svc-9v6qq Mar 8 15:02:48.052: INFO: Got endpoints: latency-svc-9v6qq [1.221858016s] Mar 8 15:02:48.078: INFO: Created: latency-svc-6sckb Mar 8 15:02:48.120: INFO: Got endpoints: latency-svc-6sckb [1.082916613s] Mar 8 15:02:48.193: INFO: Created: latency-svc-ftzn8 Mar 8 15:02:48.199: INFO: Got endpoints: latency-svc-ftzn8 [918.951961ms] Mar 8 15:02:48.239: INFO: Created: latency-svc-8zb92 Mar 8 15:02:48.250: INFO: Got endpoints: latency-svc-8zb92 [964.806263ms] Mar 8 15:02:48.277: INFO: Created: latency-svc-xll8d Mar 8 15:02:48.286: INFO: Got endpoints: latency-svc-xll8d [969.606157ms] Mar 8 15:02:48.366: INFO: Created: latency-svc-vxttb Mar 8 15:02:48.369: INFO: Got endpoints: latency-svc-vxttb [975.603483ms] Mar 8 15:02:48.407: INFO: Created: latency-svc-dcmmx Mar 8 15:02:48.412: INFO: Got endpoints: latency-svc-dcmmx 
[1.011282126s] Mar 8 15:02:48.433: INFO: Created: latency-svc-hl769 Mar 8 15:02:48.436: INFO: Got endpoints: latency-svc-hl769 [993.633094ms] Mar 8 15:02:48.455: INFO: Created: latency-svc-zvvhw Mar 8 15:02:48.462: INFO: Got endpoints: latency-svc-zvvhw [808.782289ms] Mar 8 15:02:48.546: INFO: Created: latency-svc-d99q2 Mar 8 15:02:48.549: INFO: Got endpoints: latency-svc-d99q2 [786.725555ms] Mar 8 15:02:48.595: INFO: Created: latency-svc-44qx9 Mar 8 15:02:48.611: INFO: Got endpoints: latency-svc-44qx9 [819.888134ms] Mar 8 15:02:48.636: INFO: Created: latency-svc-lk8pv Mar 8 15:02:48.701: INFO: Got endpoints: latency-svc-lk8pv [855.982806ms] Mar 8 15:02:48.704: INFO: Created: latency-svc-c9hdf Mar 8 15:02:48.733: INFO: Got endpoints: latency-svc-c9hdf [830.265774ms] Mar 8 15:02:48.788: INFO: Created: latency-svc-xc4cj Mar 8 15:02:48.797: INFO: Got endpoints: latency-svc-xc4cj [866.958301ms] Mar 8 15:02:48.869: INFO: Created: latency-svc-k7g76 Mar 8 15:02:48.877: INFO: Got endpoints: latency-svc-k7g76 [893.68209ms] Mar 8 15:02:48.905: INFO: Created: latency-svc-jvld5 Mar 8 15:02:48.908: INFO: Got endpoints: latency-svc-jvld5 [855.665966ms] Mar 8 15:02:49.055: INFO: Created: latency-svc-7rpph Mar 8 15:02:49.059: INFO: Got endpoints: latency-svc-7rpph [938.716628ms] Mar 8 15:02:49.247: INFO: Created: latency-svc-hkdwv Mar 8 15:02:49.252: INFO: Got endpoints: latency-svc-hkdwv [1.052897281s] Mar 8 15:02:49.301: INFO: Created: latency-svc-b888q Mar 8 15:02:49.316: INFO: Got endpoints: latency-svc-b888q [1.065195899s] Mar 8 15:02:49.439: INFO: Created: latency-svc-2cwhs Mar 8 15:02:49.443: INFO: Got endpoints: latency-svc-2cwhs [1.157399974s] Mar 8 15:02:49.518: INFO: Created: latency-svc-hmzlj Mar 8 15:02:49.531: INFO: Got endpoints: latency-svc-hmzlj [1.162157853s] Mar 8 15:02:49.600: INFO: Created: latency-svc-twbql Mar 8 15:02:49.602: INFO: Got endpoints: latency-svc-twbql [1.190508595s] Mar 8 15:02:49.664: INFO: Created: latency-svc-th24w Mar 8 15:02:49.681: INFO: 
Got endpoints: latency-svc-th24w [1.24504969s] Mar 8 15:02:49.749: INFO: Created: latency-svc-q64kk Mar 8 15:02:49.752: INFO: Got endpoints: latency-svc-q64kk [1.290275824s] Mar 8 15:02:49.800: INFO: Created: latency-svc-pp8dd Mar 8 15:02:49.843: INFO: Got endpoints: latency-svc-pp8dd [1.293770781s] Mar 8 15:02:49.911: INFO: Created: latency-svc-ff64l Mar 8 15:02:49.916: INFO: Got endpoints: latency-svc-ff64l [1.304252837s] Mar 8 15:02:49.946: INFO: Created: latency-svc-49blm Mar 8 15:02:49.964: INFO: Got endpoints: latency-svc-49blm [1.262467354s] Mar 8 15:02:50.001: INFO: Created: latency-svc-smgt2 Mar 8 15:02:50.073: INFO: Got endpoints: latency-svc-smgt2 [1.339808892s] Mar 8 15:02:50.075: INFO: Created: latency-svc-fd5cx Mar 8 15:02:50.084: INFO: Got endpoints: latency-svc-fd5cx [1.286836846s] Mar 8 15:02:50.169: INFO: Created: latency-svc-m4npp Mar 8 15:02:50.216: INFO: Got endpoints: latency-svc-m4npp [1.339454166s] Mar 8 15:02:50.218: INFO: Created: latency-svc-dn9ph Mar 8 15:02:50.223: INFO: Got endpoints: latency-svc-dn9ph [1.315060178s] Mar 8 15:02:50.253: INFO: Created: latency-svc-95std Mar 8 15:02:50.277: INFO: Got endpoints: latency-svc-95std [1.217435015s] Mar 8 15:02:50.366: INFO: Created: latency-svc-p9smk Mar 8 15:02:50.383: INFO: Got endpoints: latency-svc-p9smk [1.131100215s] Mar 8 15:02:50.419: INFO: Created: latency-svc-54ntr Mar 8 15:02:50.434: INFO: Got endpoints: latency-svc-54ntr [1.118412938s] Mar 8 15:02:50.462: INFO: Created: latency-svc-tckvn Mar 8 15:02:50.516: INFO: Got endpoints: latency-svc-tckvn [1.073280908s] Mar 8 15:02:50.551: INFO: Created: latency-svc-vx96q Mar 8 15:02:50.554: INFO: Got endpoints: latency-svc-vx96q [1.022705007s] Mar 8 15:02:50.579: INFO: Created: latency-svc-bvbc2 Mar 8 15:02:50.584: INFO: Got endpoints: latency-svc-bvbc2 [981.968472ms] Mar 8 15:02:50.602: INFO: Created: latency-svc-ldlv9 Mar 8 15:02:50.608: INFO: Got endpoints: latency-svc-ldlv9 [926.946543ms] Mar 8 15:02:50.690: INFO: Created: 
latency-svc-x2scv Mar 8 15:02:50.715: INFO: Got endpoints: latency-svc-x2scv [962.611981ms] Mar 8 15:02:50.715: INFO: Created: latency-svc-2fjb8 Mar 8 15:02:50.719: INFO: Got endpoints: latency-svc-2fjb8 [875.303479ms] Mar 8 15:02:50.743: INFO: Created: latency-svc-7zb42 Mar 8 15:02:50.747: INFO: Got endpoints: latency-svc-7zb42 [831.615272ms] Mar 8 15:02:50.770: INFO: Created: latency-svc-mc2sd Mar 8 15:02:50.777: INFO: Got endpoints: latency-svc-mc2sd [813.444291ms] Mar 8 15:02:50.846: INFO: Created: latency-svc-659gh Mar 8 15:02:50.849: INFO: Got endpoints: latency-svc-659gh [776.263944ms] Mar 8 15:02:50.881: INFO: Created: latency-svc-2vhhg Mar 8 15:02:50.886: INFO: Got endpoints: latency-svc-2vhhg [801.99954ms] Mar 8 15:02:50.907: INFO: Created: latency-svc-bw2dw Mar 8 15:02:50.921: INFO: Got endpoints: latency-svc-bw2dw [704.423911ms] Mar 8 15:02:50.943: INFO: Created: latency-svc-m9sxd Mar 8 15:02:51.013: INFO: Got endpoints: latency-svc-m9sxd [789.330471ms] Mar 8 15:02:51.015: INFO: Created: latency-svc-plzt8 Mar 8 15:02:51.019: INFO: Got endpoints: latency-svc-plzt8 [742.543508ms] Mar 8 15:02:51.061: INFO: Created: latency-svc-vbg67 Mar 8 15:02:51.067: INFO: Got endpoints: latency-svc-vbg67 [684.020083ms] Mar 8 15:02:51.085: INFO: Created: latency-svc-9fm4d Mar 8 15:02:51.091: INFO: Got endpoints: latency-svc-9fm4d [657.191743ms] Mar 8 15:02:51.110: INFO: Created: latency-svc-l2vss Mar 8 15:02:51.168: INFO: Got endpoints: latency-svc-l2vss [651.694185ms] Mar 8 15:02:51.172: INFO: Created: latency-svc-8gl7t Mar 8 15:02:51.176: INFO: Got endpoints: latency-svc-8gl7t [621.260774ms] Mar 8 15:02:51.208: INFO: Created: latency-svc-z2nrz Mar 8 15:02:51.212: INFO: Got endpoints: latency-svc-z2nrz [627.815324ms] Mar 8 15:02:51.232: INFO: Created: latency-svc-8jht9 Mar 8 15:02:51.250: INFO: Got endpoints: latency-svc-8jht9 [641.52547ms] Mar 8 15:02:51.331: INFO: Created: latency-svc-fkch9 Mar 8 15:02:51.333: INFO: Got endpoints: latency-svc-fkch9 [618.660614ms] Mar 
8 15:02:51.409: INFO: Created: latency-svc-8bnnm Mar 8 15:02:51.418: INFO: Got endpoints: latency-svc-8bnnm [698.83588ms] Mar 8 15:02:51.504: INFO: Created: latency-svc-g6rj7 Mar 8 15:02:51.513: INFO: Got endpoints: latency-svc-g6rj7 [765.896492ms] Mar 8 15:02:51.534: INFO: Created: latency-svc-9pdq8 Mar 8 15:02:51.537: INFO: Got endpoints: latency-svc-9pdq8 [759.682578ms] Mar 8 15:02:51.591: INFO: Created: latency-svc-7gxxs Mar 8 15:02:51.598: INFO: Got endpoints: latency-svc-7gxxs [748.597753ms] Mar 8 15:02:51.660: INFO: Created: latency-svc-75qff Mar 8 15:02:51.662: INFO: Got endpoints: latency-svc-75qff [775.694899ms] Mar 8 15:02:51.698: INFO: Created: latency-svc-w8k9h Mar 8 15:02:51.706: INFO: Got endpoints: latency-svc-w8k9h [785.656337ms] Mar 8 15:02:51.733: INFO: Created: latency-svc-kstn9 Mar 8 15:02:51.742: INFO: Got endpoints: latency-svc-kstn9 [729.719534ms] Mar 8 15:02:51.809: INFO: Created: latency-svc-prn2x Mar 8 15:02:51.812: INFO: Got endpoints: latency-svc-prn2x [792.33321ms] Mar 8 15:02:51.832: INFO: Created: latency-svc-tn6tc Mar 8 15:02:51.839: INFO: Got endpoints: latency-svc-tn6tc [772.069464ms] Mar 8 15:02:51.862: INFO: Created: latency-svc-lpgnl Mar 8 15:02:51.870: INFO: Got endpoints: latency-svc-lpgnl [778.33322ms] Mar 8 15:02:51.890: INFO: Created: latency-svc-6p6k7 Mar 8 15:02:51.901: INFO: Got endpoints: latency-svc-6p6k7 [733.221432ms] Mar 8 15:02:51.965: INFO: Created: latency-svc-nzrkl Mar 8 15:02:51.967: INFO: Got endpoints: latency-svc-nzrkl [791.846019ms] Mar 8 15:02:52.027: INFO: Created: latency-svc-7fwhz Mar 8 15:02:52.032: INFO: Got endpoints: latency-svc-7fwhz [820.153643ms] Mar 8 15:02:52.059: INFO: Created: latency-svc-r4zqv Mar 8 15:02:52.062: INFO: Got endpoints: latency-svc-r4zqv [812.187715ms] Mar 8 15:02:52.108: INFO: Created: latency-svc-zhpc5 Mar 8 15:02:52.111: INFO: Got endpoints: latency-svc-zhpc5 [777.387169ms] Mar 8 15:02:52.137: INFO: Created: latency-svc-9wgkp Mar 8 15:02:52.140: INFO: Got endpoints: 
latency-svc-9wgkp [722.922979ms] Mar 8 15:02:52.162: INFO: Created: latency-svc-rnq8q Mar 8 15:02:52.171: INFO: Got endpoints: latency-svc-rnq8q [657.909637ms] Mar 8 15:02:52.204: INFO: Created: latency-svc-2m9ln Mar 8 15:02:52.282: INFO: Created: latency-svc-w6nz7 Mar 8 15:02:52.282: INFO: Got endpoints: latency-svc-2m9ln [745.316906ms] Mar 8 15:02:52.292: INFO: Got endpoints: latency-svc-w6nz7 [694.545081ms] Mar 8 15:02:52.322: INFO: Created: latency-svc-dvhmw Mar 8 15:02:52.328: INFO: Got endpoints: latency-svc-dvhmw [665.853772ms] Mar 8 15:02:52.360: INFO: Created: latency-svc-dlxc9 Mar 8 15:02:52.364: INFO: Got endpoints: latency-svc-dlxc9 [657.893682ms] Mar 8 15:02:52.426: INFO: Created: latency-svc-zlfjl Mar 8 15:02:52.428: INFO: Got endpoints: latency-svc-zlfjl [685.931203ms] Mar 8 15:02:52.451: INFO: Created: latency-svc-ptcw8 Mar 8 15:02:52.455: INFO: Got endpoints: latency-svc-ptcw8 [643.151089ms] Mar 8 15:02:52.478: INFO: Created: latency-svc-hnvts Mar 8 15:02:52.497: INFO: Got endpoints: latency-svc-hnvts [657.723627ms] Mar 8 15:02:52.515: INFO: Created: latency-svc-fp5xc Mar 8 15:02:52.522: INFO: Got endpoints: latency-svc-fp5xc [652.083013ms] Mar 8 15:02:52.570: INFO: Created: latency-svc-fd2lr Mar 8 15:02:52.576: INFO: Got endpoints: latency-svc-fd2lr [674.186069ms] Mar 8 15:02:52.607: INFO: Created: latency-svc-kwt4v Mar 8 15:02:52.618: INFO: Got endpoints: latency-svc-kwt4v [650.297822ms] Mar 8 15:02:52.658: INFO: Created: latency-svc-72x27 Mar 8 15:02:52.731: INFO: Got endpoints: latency-svc-72x27 [698.768828ms] Mar 8 15:02:52.733: INFO: Created: latency-svc-dr4n6 Mar 8 15:02:52.745: INFO: Got endpoints: latency-svc-dr4n6 [682.47005ms] Mar 8 15:02:52.781: INFO: Created: latency-svc-d66n7 Mar 8 15:02:52.793: INFO: Got endpoints: latency-svc-d66n7 [682.355691ms] Mar 8 15:02:52.912: INFO: Created: latency-svc-wf48q Mar 8 15:02:52.918: INFO: Got endpoints: latency-svc-wf48q [777.459104ms] Mar 8 15:02:52.951: INFO: Created: latency-svc-vsq6b Mar 8 
15:02:52.961: INFO: Got endpoints: latency-svc-vsq6b [789.677388ms] Mar 8 15:02:53.067: INFO: Created: latency-svc-dg79w Mar 8 15:02:53.070: INFO: Got endpoints: latency-svc-dg79w [787.58898ms] Mar 8 15:02:53.096: INFO: Created: latency-svc-msmtz Mar 8 15:02:53.110: INFO: Got endpoints: latency-svc-msmtz [818.155681ms] Mar 8 15:02:53.146: INFO: Created: latency-svc-vnsvd Mar 8 15:02:53.282: INFO: Got endpoints: latency-svc-vnsvd [954.164291ms] Mar 8 15:02:53.284: INFO: Created: latency-svc-cjd8k Mar 8 15:02:53.292: INFO: Got endpoints: latency-svc-cjd8k [927.399065ms] Mar 8 15:02:53.358: INFO: Created: latency-svc-h4vk5 Mar 8 15:02:53.364: INFO: Got endpoints: latency-svc-h4vk5 [935.80853ms] Mar 8 15:02:53.487: INFO: Created: latency-svc-sbn52 Mar 8 15:02:53.490: INFO: Got endpoints: latency-svc-sbn52 [1.035132473s] Mar 8 15:02:53.528: INFO: Created: latency-svc-nnvdv Mar 8 15:02:53.532: INFO: Got endpoints: latency-svc-nnvdv [1.035606354s] Mar 8 15:02:53.551: INFO: Created: latency-svc-csr5z Mar 8 15:02:53.557: INFO: Got endpoints: latency-svc-csr5z [1.035157394s] Mar 8 15:02:53.575: INFO: Created: latency-svc-qhjg8 Mar 8 15:02:53.581: INFO: Got endpoints: latency-svc-qhjg8 [1.005342923s] Mar 8 15:02:53.636: INFO: Created: latency-svc-q92hp Mar 8 15:02:53.641: INFO: Got endpoints: latency-svc-q92hp [1.023102558s] Mar 8 15:02:53.696: INFO: Created: latency-svc-cqgxd Mar 8 15:02:53.714: INFO: Got endpoints: latency-svc-cqgxd [982.70068ms] Mar 8 15:02:53.797: INFO: Created: latency-svc-bwwgc Mar 8 15:02:53.846: INFO: Got endpoints: latency-svc-bwwgc [1.101459208s] Mar 8 15:02:53.847: INFO: Created: latency-svc-7fq9t Mar 8 15:02:53.880: INFO: Got endpoints: latency-svc-7fq9t [1.086831865s] Mar 8 15:02:53.954: INFO: Created: latency-svc-6w6ds Mar 8 15:02:53.957: INFO: Got endpoints: latency-svc-6w6ds [1.039022742s] Mar 8 15:02:54.009: INFO: Created: latency-svc-vsm5x Mar 8 15:02:54.021: INFO: Got endpoints: latency-svc-vsm5x [1.060133849s] Mar 8 15:02:54.051: INFO: 
Created: latency-svc-8rv57 Mar 8 15:02:54.108: INFO: Got endpoints: latency-svc-8rv57 [1.038412385s] Mar 8 15:02:54.133: INFO: Created: latency-svc-wq6fp Mar 8 15:02:54.135: INFO: Got endpoints: latency-svc-wq6fp [1.024342981s] Mar 8 15:02:54.171: INFO: Created: latency-svc-tphzk Mar 8 15:02:54.203: INFO: Got endpoints: latency-svc-tphzk [921.099874ms] Mar 8 15:02:54.264: INFO: Created: latency-svc-w76fq Mar 8 15:02:54.266: INFO: Got endpoints: latency-svc-w76fq [974.633513ms] Mar 8 15:02:54.315: INFO: Created: latency-svc-22lvt Mar 8 15:02:54.321: INFO: Got endpoints: latency-svc-22lvt [118.202627ms] Mar 8 15:02:54.358: INFO: Created: latency-svc-jpvj2 Mar 8 15:02:54.408: INFO: Got endpoints: latency-svc-jpvj2 [1.04333062s] Mar 8 15:02:54.410: INFO: Created: latency-svc-dm4sl Mar 8 15:02:54.412: INFO: Got endpoints: latency-svc-dm4sl [922.385858ms] Mar 8 15:02:54.439: INFO: Created: latency-svc-44d9j Mar 8 15:02:54.443: INFO: Got endpoints: latency-svc-44d9j [910.00593ms] Mar 8 15:02:54.463: INFO: Created: latency-svc-cvpft Mar 8 15:02:54.474: INFO: Got endpoints: latency-svc-cvpft [917.07481ms] Mar 8 15:02:54.493: INFO: Created: latency-svc-sr9v6 Mar 8 15:02:54.497: INFO: Got endpoints: latency-svc-sr9v6 [915.755023ms] Mar 8 15:02:54.588: INFO: Created: latency-svc-jr27q Mar 8 15:02:54.590: INFO: Got endpoints: latency-svc-jr27q [949.39906ms] Mar 8 15:02:54.633: INFO: Created: latency-svc-rrml5 Mar 8 15:02:54.642: INFO: Got endpoints: latency-svc-rrml5 [927.650196ms] Mar 8 15:02:54.667: INFO: Created: latency-svc-c79cw Mar 8 15:02:54.672: INFO: Got endpoints: latency-svc-c79cw [825.492989ms] Mar 8 15:02:54.737: INFO: Created: latency-svc-k47h7 Mar 8 15:02:54.759: INFO: Got endpoints: latency-svc-k47h7 [879.160807ms] Mar 8 15:02:54.759: INFO: Created: latency-svc-zhvht Mar 8 15:02:54.769: INFO: Got endpoints: latency-svc-zhvht [811.441091ms] Mar 8 15:02:54.789: INFO: Created: latency-svc-n52fv Mar 8 15:02:54.799: INFO: Got endpoints: latency-svc-n52fv 
[777.379339ms] Mar 8 15:02:54.819: INFO: Created: latency-svc-2n2cs Mar 8 15:02:54.823: INFO: Got endpoints: latency-svc-2n2cs [714.590021ms] Mar 8 15:02:54.887: INFO: Created: latency-svc-jlwb8 Mar 8 15:02:54.889: INFO: Got endpoints: latency-svc-jlwb8 [754.106014ms] Mar 8 15:02:54.919: INFO: Created: latency-svc-w75ds Mar 8 15:02:54.930: INFO: Got endpoints: latency-svc-w75ds [663.880687ms] Mar 8 15:02:54.956: INFO: Created: latency-svc-mplxq Mar 8 15:02:54.961: INFO: Got endpoints: latency-svc-mplxq [639.94658ms] Mar 8 15:02:54.987: INFO: Created: latency-svc-57z9w Mar 8 15:02:55.037: INFO: Got endpoints: latency-svc-57z9w [629.047988ms] Mar 8 15:02:55.038: INFO: Created: latency-svc-xsqs8 Mar 8 15:02:55.040: INFO: Got endpoints: latency-svc-xsqs8 [627.596304ms] Mar 8 15:02:55.071: INFO: Created: latency-svc-2lxq8 Mar 8 15:02:55.077: INFO: Got endpoints: latency-svc-2lxq8 [633.969982ms] Mar 8 15:02:55.099: INFO: Created: latency-svc-8wk5l Mar 8 15:02:55.102: INFO: Got endpoints: latency-svc-8wk5l [627.543155ms] Mar 8 15:02:55.135: INFO: Created: latency-svc-bnh9x Mar 8 15:02:55.180: INFO: Got endpoints: latency-svc-bnh9x [683.372047ms] Mar 8 15:02:55.183: INFO: Created: latency-svc-5qs2v Mar 8 15:02:55.191: INFO: Got endpoints: latency-svc-5qs2v [600.816411ms] Mar 8 15:02:55.215: INFO: Created: latency-svc-29hjc Mar 8 15:02:55.221: INFO: Got endpoints: latency-svc-29hjc [579.825538ms] Mar 8 15:02:55.239: INFO: Created: latency-svc-wd2l5 Mar 8 15:02:55.246: INFO: Got endpoints: latency-svc-wd2l5 [574.471681ms] Mar 8 15:02:55.263: INFO: Created: latency-svc-m249d Mar 8 15:02:55.331: INFO: Created: latency-svc-7x85x Mar 8 15:02:55.357: INFO: Created: latency-svc-bb92p Mar 8 15:02:55.357: INFO: Got endpoints: latency-svc-m249d [597.63392ms] Mar 8 15:02:55.360: INFO: Got endpoints: latency-svc-bb92p [561.854364ms] Mar 8 15:02:55.406: INFO: Got endpoints: latency-svc-7x85x [637.810824ms] Mar 8 15:02:55.407: INFO: Created: latency-svc-rzzpc Mar 8 15:02:55.415: INFO: 
Got endpoints: latency-svc-rzzpc [591.451416ms] Mar 8 15:02:55.492: INFO: Created: latency-svc-wcrkn Mar 8 15:02:55.494: INFO: Got endpoints: latency-svc-wcrkn [604.589694ms] Mar 8 15:02:55.530: INFO: Created: latency-svc-7c8kg Mar 8 15:02:55.535: INFO: Got endpoints: latency-svc-7c8kg [605.037579ms] Mar 8 15:02:55.555: INFO: Created: latency-svc-f9s6n Mar 8 15:02:55.560: INFO: Got endpoints: latency-svc-f9s6n [598.355341ms] Mar 8 15:02:55.581: INFO: Created: latency-svc-xdg5k Mar 8 15:02:55.590: INFO: Got endpoints: latency-svc-xdg5k [553.291524ms] Mar 8 15:02:55.642: INFO: Created: latency-svc-js6j7 Mar 8 15:02:55.644: INFO: Got endpoints: latency-svc-js6j7 [603.410048ms] Mar 8 15:02:55.687: INFO: Created: latency-svc-9kqr9 Mar 8 15:02:55.692: INFO: Got endpoints: latency-svc-9kqr9 [615.613969ms] Mar 8 15:02:55.723: INFO: Created: latency-svc-fsnlc Mar 8 15:02:55.728: INFO: Got endpoints: latency-svc-fsnlc [626.663824ms] Mar 8 15:02:55.785: INFO: Created: latency-svc-wz5ll Mar 8 15:02:55.788: INFO: Got endpoints: latency-svc-wz5ll [607.526653ms] Mar 8 15:02:55.809: INFO: Created: latency-svc-rkmcl Mar 8 15:02:55.827: INFO: Got endpoints: latency-svc-rkmcl [635.348891ms] Mar 8 15:02:55.845: INFO: Created: latency-svc-8f8bj Mar 8 15:02:55.850: INFO: Got endpoints: latency-svc-8f8bj [628.501689ms] Mar 8 15:02:55.866: INFO: Created: latency-svc-qxvgr Mar 8 15:02:55.868: INFO: Got endpoints: latency-svc-qxvgr [621.910603ms] Mar 8 15:02:55.941: INFO: Created: latency-svc-9b4ph Mar 8 15:02:55.943: INFO: Got endpoints: latency-svc-9b4ph [585.970517ms] Mar 8 15:02:55.965: INFO: Created: latency-svc-9d59d Mar 8 15:02:55.970: INFO: Got endpoints: latency-svc-9d59d [609.799107ms] Mar 8 15:02:55.990: INFO: Created: latency-svc-778m5 Mar 8 15:02:55.995: INFO: Got endpoints: latency-svc-778m5 [588.377142ms] Mar 8 15:02:56.012: INFO: Created: latency-svc-9pqqv Mar 8 15:02:56.019: INFO: Got endpoints: latency-svc-9pqqv [604.647594ms] Mar 8 15:02:56.037: INFO: Created: 
latency-svc-2fdvc Mar 8 15:02:56.090: INFO: Got endpoints: latency-svc-2fdvc [596.539739ms] Mar 8 15:02:56.093: INFO: Created: latency-svc-jgkhw Mar 8 15:02:56.104: INFO: Got endpoints: latency-svc-jgkhw [568.404155ms] Mar 8 15:02:56.133: INFO: Created: latency-svc-scchx Mar 8 15:02:56.151: INFO: Got endpoints: latency-svc-scchx [591.062037ms] Mar 8 15:02:56.175: INFO: Created: latency-svc-xxwnp Mar 8 15:02:56.240: INFO: Got endpoints: latency-svc-xxwnp [649.959271ms] Mar 8 15:02:56.242: INFO: Created: latency-svc-89q9s Mar 8 15:02:56.257: INFO: Got endpoints: latency-svc-89q9s [613.446856ms] Mar 8 15:02:56.275: INFO: Created: latency-svc-ncxpz Mar 8 15:02:56.279: INFO: Got endpoints: latency-svc-ncxpz [586.513012ms] Mar 8 15:02:56.318: INFO: Created: latency-svc-7mqrc Mar 8 15:02:56.327: INFO: Got endpoints: latency-svc-7mqrc [598.448711ms] Mar 8 15:02:56.408: INFO: Created: latency-svc-mgth2 Mar 8 15:02:56.417: INFO: Got endpoints: latency-svc-mgth2 [629.377687ms] Mar 8 15:02:56.442: INFO: Created: latency-svc-6kh4f Mar 8 15:02:56.447: INFO: Got endpoints: latency-svc-6kh4f [620.595198ms] Mar 8 15:02:56.467: INFO: Created: latency-svc-gvrnf Mar 8 15:02:56.472: INFO: Got endpoints: latency-svc-gvrnf [621.624483ms] Mar 8 15:02:56.491: INFO: Created: latency-svc-ng9hq Mar 8 15:02:56.496: INFO: Got endpoints: latency-svc-ng9hq [627.771914ms] Mar 8 15:02:56.582: INFO: Created: latency-svc-7sxfd Mar 8 15:02:56.583: INFO: Got endpoints: latency-svc-7sxfd [640.188786ms] Mar 8 15:02:56.611: INFO: Created: latency-svc-j7zbf Mar 8 15:02:56.617: INFO: Got endpoints: latency-svc-j7zbf [646.497775ms] Mar 8 15:02:56.636: INFO: Created: latency-svc-bhdz8 Mar 8 15:02:56.641: INFO: Got endpoints: latency-svc-bhdz8 [645.93011ms] Mar 8 15:02:56.641: INFO: Latencies: [59.67852ms 118.202627ms 119.21325ms 137.945826ms 174.096443ms 209.742481ms 271.255311ms 342.687587ms 400.048513ms 462.012961ms 492.665101ms 553.291524ms 561.854364ms 568.404155ms 574.471681ms 577.497161ms 579.825538ms 
585.970517ms 586.513012ms 588.377142ms 591.062037ms 591.451416ms 596.539739ms 597.63392ms 598.355341ms 598.448711ms 600.816411ms 603.410048ms 604.589694ms 604.647594ms 605.037579ms 607.526653ms 609.799107ms 613.446856ms 615.613969ms 618.660614ms 620.595198ms 621.260774ms 621.624483ms 621.910603ms 626.663824ms 627.543155ms 627.596304ms 627.771914ms 627.815324ms 628.501689ms 629.047988ms 629.377687ms 633.969982ms 635.348891ms 637.810824ms 637.994788ms 639.94658ms 640.188786ms 641.52547ms 643.151089ms 645.93011ms 646.497775ms 649.959271ms 650.297822ms 651.694185ms 652.083013ms 657.191743ms 657.723627ms 657.893682ms 657.909637ms 663.880687ms 665.853772ms 674.186069ms 682.355691ms 682.47005ms 683.372047ms 684.020083ms 685.931203ms 694.545081ms 698.768828ms 698.83588ms 704.423911ms 714.590021ms 722.922979ms 729.719534ms 733.221432ms 742.543508ms 745.316906ms 747.403653ms 748.597753ms 754.106014ms 759.682578ms 763.599645ms 765.896492ms 772.069464ms 775.694899ms 776.263944ms 777.379339ms 777.387169ms 777.459104ms 778.33322ms 785.656337ms 786.725555ms 787.58898ms 789.330471ms 789.677388ms 791.846019ms 792.33321ms 801.99954ms 808.782289ms 811.441091ms 812.187715ms 813.444291ms 818.155681ms 819.888134ms 820.150788ms 820.153643ms 825.492989ms 830.265774ms 830.983912ms 831.615272ms 835.6599ms 855.665966ms 855.982806ms 866.958301ms 875.303479ms 879.160807ms 893.68209ms 905.374118ms 910.00593ms 915.755023ms 917.07481ms 918.951961ms 921.099874ms 922.385858ms 926.946543ms 927.399065ms 927.650196ms 935.80853ms 938.716628ms 949.39906ms 954.164291ms 962.611981ms 964.806263ms 969.606157ms 974.633513ms 975.603483ms 981.968472ms 982.70068ms 993.633094ms 1.005342923s 1.011282126s 1.022705007s 1.023102558s 1.024342981s 1.035132473s 1.035157394s 1.035606354s 1.038412385s 1.039022742s 1.04333062s 1.052897281s 1.053994737s 1.060133849s 1.065195899s 1.069417541s 1.070158841s 1.073280908s 1.073354919s 1.073863853s 1.082916613s 1.086831865s 1.101459208s 1.109122391s 1.118412938s 1.131100215s 
1.157399974s 1.162157853s 1.190508595s 1.217435015s 1.221858016s 1.24504969s 1.249499376s 1.254698409s 1.262467354s 1.28515956s 1.286836846s 1.287497086s 1.290275824s 1.292650801s 1.293770781s 1.302297125s 1.304252837s 1.305107359s 1.308963343s 1.315060178s 1.3221697s 1.339454166s 1.339808892s 1.370511165s 1.388426421s 1.442677364s 1.454130136s 1.459713711s] Mar 8 15:02:56.641: INFO: 50 %ile: 789.330471ms Mar 8 15:02:56.641: INFO: 90 %ile: 1.262467354s Mar 8 15:02:56.641: INFO: 99 %ile: 1.454130136s Mar 8 15:02:56.641: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:02:56.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-zvhwg" for this suite. Mar 8 15:03:18.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:03:18.736: INFO: namespace: e2e-tests-svc-latency-zvhwg, resource: bindings, ignored listing per whitelist Mar 8 15:03:18.755: INFO: namespace e2e-tests-svc-latency-zvhwg deletion completed in 22.108231827s • [SLOW TEST:35.865 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:03:18.756: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 8 15:03:18.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-jnw8b' Mar 8 15:03:18.945: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 15:03:18.945: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Mar 8 15:03:18.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-jnw8b' Mar 8 15:03:19.039: INFO: stderr: "" Mar 8 15:03:19.039: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:03:19.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-jnw8b" for this suite. 
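The service-endpoint-latency test earlier in the run reduces its 200 samples to 50/90/99 %ile lines. A nearest-rank percentile over the sorted samples is one common way to produce that kind of summary; the sketch below uses that definition (the framework's exact indexing may differ slightly, and `percentile` is a hypothetical helper, not the suite's code):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample such that at least
    p percent of the data is less than or equal to it. This is one
    common definition; the e2e framework's exact indexing may differ."""
    if not samples:
        raise ValueError("no samples")
    data = sorted(samples)
    # rank is 1-based; clamp to at least 1 so tiny p still picks a sample
    rank = max(math.ceil(p / 100.0 * len(data)), 1)
    return data[rank - 1]
```

By construction the 50 %ile lands near the middle of the sorted sample list, which matches how the reported 789.330471ms sits near the center of the latencies printed above.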
Mar 8 15:03:41.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:03:41.063: INFO: namespace: e2e-tests-kubectl-jnw8b, resource: bindings, ignored listing per whitelist Mar 8 15:03:41.137: INFO: namespace e2e-tests-kubectl-jnw8b deletion completed in 22.092734754s • [SLOW TEST:22.382 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:03:41.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 8 15:03:53.293: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:03:53.297: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:03:55.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:03:55.301: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:03:57.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:03:57.301: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:03:59.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:03:59.301: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:04:01.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:04:01.301: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:04:03.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:04:03.301: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:04:05.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:04:05.301: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:04:07.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:04:07.301: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:04:09.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:04:09.300: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:04:11.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:04:11.301: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:04:13.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:04:13.301: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:04:15.297: INFO: Waiting for pod pod-with-prestop-exec-hook 
to disappear Mar 8 15:04:15.300: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:04:17.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:04:17.301: INFO: Pod pod-with-prestop-exec-hook still exists Mar 8 15:04:19.297: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 8 15:04:19.299: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:04:19.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-99z9b" for this suite. Mar 8 15:04:41.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:04:41.399: INFO: namespace: e2e-tests-container-lifecycle-hook-99z9b, resource: bindings, ignored listing per whitelist Mar 8 15:04:41.414: INFO: namespace e2e-tests-container-lifecycle-hook-99z9b deletion completed in 22.108269725s • [SLOW TEST:60.276 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
STEP: Creating a kubernetes client Mar 8 15:04:41.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-233f52fe-614e-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume secrets Mar 8 15:04:41.504: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-233fdef6-614e-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-9vsxw" to be "success or failure" Mar 8 15:04:41.507: INFO: Pod "pod-projected-secrets-233fdef6-614e-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.338688ms Mar 8 15:04:43.511: INFO: Pod "pod-projected-secrets-233fdef6-614e-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007522971s STEP: Saw pod success Mar 8 15:04:43.511: INFO: Pod "pod-projected-secrets-233fdef6-614e-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:04:43.514: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-233fdef6-614e-11ea-b38e-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Mar 8 15:04:43.621: INFO: Waiting for pod pod-projected-secrets-233fdef6-614e-11ea-b38e-0242ac11000f to disappear Mar 8 15:04:43.627: INFO: Pod pod-projected-secrets-233fdef6-614e-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:04:43.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-9vsxw" for this suite. 
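Waits like `Waiting up to 5m0s for pod ... to be "success or failure"` above are fixed-interval polls against a deadline: check a condition, sleep, check again, until it holds or time runs out. A minimal sketch of that pattern with hypothetical names (the e2e framework's own wait helpers differ in detail):

```python
import time

def wait_for(condition, timeout=120.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` every `interval` seconds until it returns True
    or `timeout` seconds elapse. Returns True on success, False on
    timeout. `clock` and `sleep` are injectable for testing."""
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        sleep(interval)
    return False
```

Injecting `clock` and `sleep` keeps the sketch testable without real delays; the suite instead polls wall-clock time at roughly 2s intervals, as the timestamps in the log above show.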
Mar 8 15:04:49.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:04:49.717: INFO: namespace: e2e-tests-projected-9vsxw, resource: bindings, ignored listing per whitelist Mar 8 15:04:49.740: INFO: namespace e2e-tests-projected-9vsxw deletion completed in 6.110024508s • [SLOW TEST:8.326 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:04:49.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-2838f091-614e-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume secrets Mar 8 15:04:49.889: INFO: Waiting up to 5m0s for pod "pod-secrets-2839a645-614e-11ea-b38e-0242ac11000f" in namespace "e2e-tests-secrets-t4fvh" to be "success or failure" Mar 8 15:04:49.891: INFO: Pod "pod-secrets-2839a645-614e-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.737319ms Mar 8 15:04:51.919: INFO: Pod "pod-secrets-2839a645-614e-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.030179976s STEP: Saw pod success Mar 8 15:04:51.919: INFO: Pod "pod-secrets-2839a645-614e-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:04:51.921: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-2839a645-614e-11ea-b38e-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 8 15:04:51.940: INFO: Waiting for pod pod-secrets-2839a645-614e-11ea-b38e-0242ac11000f to disappear Mar 8 15:04:51.948: INFO: Pod pod-secrets-2839a645-614e-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:04:51.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-t4fvh" for this suite. Mar 8 15:04:57.976: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:04:58.015: INFO: namespace: e2e-tests-secrets-t4fvh, resource: bindings, ignored listing per whitelist Mar 8 15:04:58.061: INFO: namespace e2e-tests-secrets-t4fvh deletion completed in 6.109643326s • [SLOW TEST:8.320 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:04:58.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 8 15:04:58.163: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:04:59.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-custom-resource-definition-4brs5" for this suite. Mar 8 15:05:05.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:05:05.333: INFO: namespace: e2e-tests-custom-resource-definition-4brs5, resource: bindings, ignored listing per whitelist Mar 8 15:05:05.357: INFO: namespace e2e-tests-custom-resource-definition-4brs5 deletion completed in 6.128727914s • [SLOW TEST:7.296 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an 
image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:05:05.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 8 15:05:05.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-hkscp' Mar 8 15:05:05.530: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 15:05:05.530: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404 Mar 8 15:05:09.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-hkscp' Mar 8 15:05:09.683: INFO: stderr: "" Mar 8 15:05:09.683: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:05:09.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hkscp" for this suite. 
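The step "verifying the pod controlled by deployment e2e-test-nginx-deployment was created" comes down to an ownerReference check: a Deployment's pods are controlled by a ReplicaSet whose name is the Deployment name plus a hash suffix. A sketch of that check over a Pod-shaped dict (`controlled_by` is a hypothetical helper, not the suite's code):

```python
def controlled_by(pod, kind, name_prefix):
    """Return True if `pod` (a dict shaped like a v1 Pod) has a
    controller ownerReference of the given kind whose name starts with
    `name_prefix`. The prefix match is an assumption here, reflecting
    that ReplicaSet names append a hash to the Deployment name."""
    for ref in pod.get("metadata", {}).get("ownerReferences", []):
        if (ref.get("controller")
                and ref.get("kind") == kind
                and ref.get("name", "").startswith(name_prefix)):
            return True
    return False
```

Note the indirection: the pods of a Deployment are owned by the ReplicaSet, and the ReplicaSet is in turn owned by the Deployment, so a full verification would follow two ownerReference hops.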
Mar 8 15:05:31.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:05:31.768: INFO: namespace: e2e-tests-kubectl-hkscp, resource: bindings, ignored listing per whitelist Mar 8 15:05:31.813: INFO: namespace e2e-tests-kubectl-hkscp deletion completed in 22.124708418s • [SLOW TEST:26.455 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:05:31.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-lfjt6 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace e2e-tests-statefulset-lfjt6 STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-lfjt6 Mar 8 15:05:31.943: INFO: Found 0 stateful pods, waiting for 1 Mar 8 15:05:41.947: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 8 15:05:41.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lfjt6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 8 15:05:42.199: INFO: stderr: "I0308 15:05:42.096667 1094 log.go:172] (0xc00013a790) (0xc0006714a0) Create stream\nI0308 15:05:42.096723 1094 log.go:172] (0xc00013a790) (0xc0006714a0) Stream added, broadcasting: 1\nI0308 15:05:42.098903 1094 log.go:172] (0xc00013a790) Reply frame received for 1\nI0308 15:05:42.098935 1094 log.go:172] (0xc00013a790) (0xc000671540) Create stream\nI0308 15:05:42.098943 1094 log.go:172] (0xc00013a790) (0xc000671540) Stream added, broadcasting: 3\nI0308 15:05:42.099840 1094 log.go:172] (0xc00013a790) Reply frame received for 3\nI0308 15:05:42.099894 1094 log.go:172] (0xc00013a790) (0xc000708000) Create stream\nI0308 15:05:42.099905 1094 log.go:172] (0xc00013a790) (0xc000708000) Stream added, broadcasting: 5\nI0308 15:05:42.100820 1094 log.go:172] (0xc00013a790) Reply frame received for 5\nI0308 15:05:42.194589 1094 log.go:172] (0xc00013a790) Data frame received for 3\nI0308 15:05:42.194631 1094 log.go:172] (0xc000671540) (3) Data frame handling\nI0308 15:05:42.194659 1094 log.go:172] (0xc000671540) (3) Data frame sent\nI0308 15:05:42.194839 1094 log.go:172] (0xc00013a790) Data frame received for 5\nI0308 15:05:42.194866 1094 log.go:172] (0xc000708000) 
(5) Data frame handling\nI0308 15:05:42.195031 1094 log.go:172] (0xc00013a790) Data frame received for 3\nI0308 15:05:42.195051 1094 log.go:172] (0xc000671540) (3) Data frame handling\nI0308 15:05:42.196445 1094 log.go:172] (0xc00013a790) Data frame received for 1\nI0308 15:05:42.196464 1094 log.go:172] (0xc0006714a0) (1) Data frame handling\nI0308 15:05:42.196478 1094 log.go:172] (0xc0006714a0) (1) Data frame sent\nI0308 15:05:42.196488 1094 log.go:172] (0xc00013a790) (0xc0006714a0) Stream removed, broadcasting: 1\nI0308 15:05:42.196598 1094 log.go:172] (0xc00013a790) Go away received\nI0308 15:05:42.196737 1094 log.go:172] (0xc00013a790) (0xc0006714a0) Stream removed, broadcasting: 1\nI0308 15:05:42.196761 1094 log.go:172] (0xc00013a790) (0xc000671540) Stream removed, broadcasting: 3\nI0308 15:05:42.196778 1094 log.go:172] (0xc00013a790) (0xc000708000) Stream removed, broadcasting: 5\n" Mar 8 15:05:42.199: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 8 15:05:42.199: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 8 15:05:42.203: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 8 15:05:52.207: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 15:05:52.207: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:05:52.222: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999427s Mar 8 15:05:53.227: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996682676s Mar 8 15:05:54.231: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.991988794s Mar 8 15:05:55.235: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.987297045s Mar 8 15:05:56.240: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.983148492s Mar 8 15:05:57.245: INFO: Verifying statefulset ss doesn't scale 
past 1 for another 4.97864714s Mar 8 15:05:58.250: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.974066988s Mar 8 15:05:59.255: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.968847226s Mar 8 15:06:00.259: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.964138519s Mar 8 15:06:01.264: INFO: Verifying statefulset ss doesn't scale past 1 for another 959.384461ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-lfjt6 Mar 8 15:06:02.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lfjt6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 15:06:02.464: INFO: stderr: "I0308 15:06:02.408895 1116 log.go:172] (0xc0005b6420) (0xc0005fd360) Create stream\nI0308 15:06:02.408951 1116 log.go:172] (0xc0005b6420) (0xc0005fd360) Stream added, broadcasting: 1\nI0308 15:06:02.411086 1116 log.go:172] (0xc0005b6420) Reply frame received for 1\nI0308 15:06:02.411120 1116 log.go:172] (0xc0005b6420) (0xc0005fd400) Create stream\nI0308 15:06:02.411128 1116 log.go:172] (0xc0005b6420) (0xc0005fd400) Stream added, broadcasting: 3\nI0308 15:06:02.412151 1116 log.go:172] (0xc0005b6420) Reply frame received for 3\nI0308 15:06:02.412175 1116 log.go:172] (0xc0005b6420) (0xc0005fd4a0) Create stream\nI0308 15:06:02.412181 1116 log.go:172] (0xc0005b6420) (0xc0005fd4a0) Stream added, broadcasting: 5\nI0308 15:06:02.412879 1116 log.go:172] (0xc0005b6420) Reply frame received for 5\nI0308 15:06:02.457601 1116 log.go:172] (0xc0005b6420) Data frame received for 3\nI0308 15:06:02.457658 1116 log.go:172] (0xc0005fd400) (3) Data frame handling\nI0308 15:06:02.457674 1116 log.go:172] (0xc0005fd400) (3) Data frame sent\nI0308 15:06:02.457745 1116 log.go:172] (0xc0005b6420) Data frame received for 3\nI0308 15:06:02.457769 1116 log.go:172] (0xc0005fd400) (3) Data frame handling\nI0308 
15:06:02.457879 1116 log.go:172] (0xc0005b6420) Data frame received for 5\nI0308 15:06:02.457889 1116 log.go:172] (0xc0005fd4a0) (5) Data frame handling\nI0308 15:06:02.461874 1116 log.go:172] (0xc0005b6420) Data frame received for 1\nI0308 15:06:02.462012 1116 log.go:172] (0xc0005fd360) (1) Data frame handling\nI0308 15:06:02.462030 1116 log.go:172] (0xc0005fd360) (1) Data frame sent\nI0308 15:06:02.462046 1116 log.go:172] (0xc0005b6420) (0xc0005fd360) Stream removed, broadcasting: 1\nI0308 15:06:02.462060 1116 log.go:172] (0xc0005b6420) Go away received\nI0308 15:06:02.462326 1116 log.go:172] (0xc0005b6420) (0xc0005fd360) Stream removed, broadcasting: 1\nI0308 15:06:02.462351 1116 log.go:172] (0xc0005b6420) (0xc0005fd400) Stream removed, broadcasting: 3\nI0308 15:06:02.462365 1116 log.go:172] (0xc0005b6420) (0xc0005fd4a0) Stream removed, broadcasting: 5\n" Mar 8 15:06:02.464: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 8 15:06:02.464: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 8 15:06:02.468: INFO: Found 1 stateful pods, waiting for 3 Mar 8 15:06:12.474: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:06:12.474: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:06:12.474: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 8 15:06:12.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lfjt6 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 8 15:06:12.677: INFO: stderr: "I0308 15:06:12.615659 1139 log.go:172] (0xc0004282c0) (0xc000489220) Create stream\nI0308 15:06:12.615717 1139 log.go:172] 
(0xc0004282c0) (0xc000489220) Stream added, broadcasting: 1\nI0308 15:06:12.620955 1139 log.go:172] (0xc0004282c0) Reply frame received for 1\nI0308 15:06:12.621011 1139 log.go:172] (0xc0004282c0) (0xc00070c000) Create stream\nI0308 15:06:12.621032 1139 log.go:172] (0xc0004282c0) (0xc00070c000) Stream added, broadcasting: 3\nI0308 15:06:12.621999 1139 log.go:172] (0xc0004282c0) Reply frame received for 3\nI0308 15:06:12.622042 1139 log.go:172] (0xc0004282c0) (0xc0004892c0) Create stream\nI0308 15:06:12.622054 1139 log.go:172] (0xc0004282c0) (0xc0004892c0) Stream added, broadcasting: 5\nI0308 15:06:12.622959 1139 log.go:172] (0xc0004282c0) Reply frame received for 5\nI0308 15:06:12.673540 1139 log.go:172] (0xc0004282c0) Data frame received for 5\nI0308 15:06:12.673568 1139 log.go:172] (0xc0004892c0) (5) Data frame handling\nI0308 15:06:12.673601 1139 log.go:172] (0xc0004282c0) Data frame received for 3\nI0308 15:06:12.673624 1139 log.go:172] (0xc00070c000) (3) Data frame handling\nI0308 15:06:12.673637 1139 log.go:172] (0xc00070c000) (3) Data frame sent\nI0308 15:06:12.673644 1139 log.go:172] (0xc0004282c0) Data frame received for 3\nI0308 15:06:12.673648 1139 log.go:172] (0xc00070c000) (3) Data frame handling\nI0308 15:06:12.674760 1139 log.go:172] (0xc0004282c0) Data frame received for 1\nI0308 15:06:12.674780 1139 log.go:172] (0xc000489220) (1) Data frame handling\nI0308 15:06:12.674799 1139 log.go:172] (0xc000489220) (1) Data frame sent\nI0308 15:06:12.674878 1139 log.go:172] (0xc0004282c0) (0xc000489220) Stream removed, broadcasting: 1\nI0308 15:06:12.675047 1139 log.go:172] (0xc0004282c0) (0xc000489220) Stream removed, broadcasting: 1\nI0308 15:06:12.675064 1139 log.go:172] (0xc0004282c0) (0xc00070c000) Stream removed, broadcasting: 3\nI0308 15:06:12.675238 1139 log.go:172] (0xc0004282c0) Go away received\nI0308 15:06:12.675278 1139 log.go:172] (0xc0004282c0) (0xc0004892c0) Stream removed, broadcasting: 5\n" Mar 8 15:06:12.677: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 8 15:06:12.677: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 8 15:06:12.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lfjt6 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 8 15:06:12.879: INFO: stderr: "I0308 15:06:12.795330 1161 log.go:172] (0xc0008262c0) (0xc000722640) Create stream\nI0308 15:06:12.795374 1161 log.go:172] (0xc0008262c0) (0xc000722640) Stream added, broadcasting: 1\nI0308 15:06:12.797357 1161 log.go:172] (0xc0008262c0) Reply frame received for 1\nI0308 15:06:12.797399 1161 log.go:172] (0xc0008262c0) (0xc0007a4d20) Create stream\nI0308 15:06:12.797417 1161 log.go:172] (0xc0008262c0) (0xc0007a4d20) Stream added, broadcasting: 3\nI0308 15:06:12.798056 1161 log.go:172] (0xc0008262c0) Reply frame received for 3\nI0308 15:06:12.798091 1161 log.go:172] (0xc0008262c0) (0xc0005fc000) Create stream\nI0308 15:06:12.798108 1161 log.go:172] (0xc0008262c0) (0xc0005fc000) Stream added, broadcasting: 5\nI0308 15:06:12.799011 1161 log.go:172] (0xc0008262c0) Reply frame received for 5\nI0308 15:06:12.875144 1161 log.go:172] (0xc0008262c0) Data frame received for 5\nI0308 15:06:12.875211 1161 log.go:172] (0xc0008262c0) Data frame received for 3\nI0308 15:06:12.875279 1161 log.go:172] (0xc0007a4d20) (3) Data frame handling\nI0308 15:06:12.875293 1161 log.go:172] (0xc0007a4d20) (3) Data frame sent\nI0308 15:06:12.875304 1161 log.go:172] (0xc0008262c0) Data frame received for 3\nI0308 15:06:12.875319 1161 log.go:172] (0xc0007a4d20) (3) Data frame handling\nI0308 15:06:12.875334 1161 log.go:172] (0xc0005fc000) (5) Data frame handling\nI0308 15:06:12.876705 1161 log.go:172] (0xc0008262c0) Data frame received for 1\nI0308 15:06:12.876721 1161 log.go:172] (0xc000722640) (1) Data frame handling\nI0308 15:06:12.876728 1161 
log.go:172] (0xc000722640) (1) Data frame sent\nI0308 15:06:12.876747 1161 log.go:172] (0xc0008262c0) (0xc000722640) Stream removed, broadcasting: 1\nI0308 15:06:12.876780 1161 log.go:172] (0xc0008262c0) Go away received\nI0308 15:06:12.876911 1161 log.go:172] (0xc0008262c0) (0xc000722640) Stream removed, broadcasting: 1\nI0308 15:06:12.876923 1161 log.go:172] (0xc0008262c0) (0xc0007a4d20) Stream removed, broadcasting: 3\nI0308 15:06:12.876931 1161 log.go:172] (0xc0008262c0) (0xc0005fc000) Stream removed, broadcasting: 5\n" Mar 8 15:06:12.879: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 8 15:06:12.879: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 8 15:06:12.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lfjt6 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Mar 8 15:06:13.047: INFO: stderr: "I0308 15:06:12.974353 1183 log.go:172] (0xc000138580) (0xc0006e06e0) Create stream\nI0308 15:06:12.974387 1183 log.go:172] (0xc000138580) (0xc0006e06e0) Stream added, broadcasting: 1\nI0308 15:06:12.975782 1183 log.go:172] (0xc000138580) Reply frame received for 1\nI0308 15:06:12.975814 1183 log.go:172] (0xc000138580) (0xc0003843c0) Create stream\nI0308 15:06:12.975842 1183 log.go:172] (0xc000138580) (0xc0003843c0) Stream added, broadcasting: 3\nI0308 15:06:12.976800 1183 log.go:172] (0xc000138580) Reply frame received for 3\nI0308 15:06:12.976823 1183 log.go:172] (0xc000138580) (0xc00068cb40) Create stream\nI0308 15:06:12.976830 1183 log.go:172] (0xc000138580) (0xc00068cb40) Stream added, broadcasting: 5\nI0308 15:06:12.977453 1183 log.go:172] (0xc000138580) Reply frame received for 5\nI0308 15:06:13.042593 1183 log.go:172] (0xc000138580) Data frame received for 3\nI0308 15:06:13.042614 1183 log.go:172] (0xc0003843c0) (3) Data frame handling\nI0308 
15:06:13.042621 1183 log.go:172] (0xc0003843c0) (3) Data frame sent\nI0308 15:06:13.042626 1183 log.go:172] (0xc000138580) Data frame received for 3\nI0308 15:06:13.042631 1183 log.go:172] (0xc0003843c0) (3) Data frame handling\nI0308 15:06:13.042732 1183 log.go:172] (0xc000138580) Data frame received for 5\nI0308 15:06:13.042742 1183 log.go:172] (0xc00068cb40) (5) Data frame handling\nI0308 15:06:13.045124 1183 log.go:172] (0xc000138580) Data frame received for 1\nI0308 15:06:13.045151 1183 log.go:172] (0xc0006e06e0) (1) Data frame handling\nI0308 15:06:13.045171 1183 log.go:172] (0xc0006e06e0) (1) Data frame sent\nI0308 15:06:13.045189 1183 log.go:172] (0xc000138580) (0xc0006e06e0) Stream removed, broadcasting: 1\nI0308 15:06:13.045202 1183 log.go:172] (0xc000138580) Go away received\nI0308 15:06:13.045479 1183 log.go:172] (0xc000138580) (0xc0006e06e0) Stream removed, broadcasting: 1\nI0308 15:06:13.045503 1183 log.go:172] (0xc000138580) (0xc0003843c0) Stream removed, broadcasting: 3\nI0308 15:06:13.045516 1183 log.go:172] (0xc000138580) (0xc00068cb40) Stream removed, broadcasting: 5\n" Mar 8 15:06:13.047: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Mar 8 15:06:13.047: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Mar 8 15:06:13.047: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:06:13.050: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 8 15:06:23.059: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 8 15:06:23.059: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 8 15:06:23.059: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 8 15:06:23.072: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999955s Mar 8 15:06:24.077: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 8.994394756s Mar 8 15:06:25.081: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989265288s Mar 8 15:06:26.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985133232s Mar 8 15:06:27.092: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979716806s Mar 8 15:06:28.097: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.974414892s Mar 8 15:06:29.102: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.969468877s Mar 8 15:06:30.108: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964040106s Mar 8 15:06:31.113: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.95863094s Mar 8 15:06:32.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.043952ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-lfjt6 Mar 8 15:06:33.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lfjt6 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 15:06:33.319: INFO: stderr: "I0308 15:06:33.233667 1205 log.go:172] (0xc00080e2c0) (0xc0006fc640) Create stream\nI0308 15:06:33.233701 1205 log.go:172] (0xc00080e2c0) (0xc0006fc640) Stream added, broadcasting: 1\nI0308 15:06:33.234847 1205 log.go:172] (0xc00080e2c0) Reply frame received for 1\nI0308 15:06:33.234867 1205 log.go:172] (0xc00080e2c0) (0xc0006fc6e0) Create stream\nI0308 15:06:33.234872 1205 log.go:172] (0xc00080e2c0) (0xc0006fc6e0) Stream added, broadcasting: 3\nI0308 15:06:33.235381 1205 log.go:172] (0xc00080e2c0) Reply frame received for 3\nI0308 15:06:33.235400 1205 log.go:172] (0xc00080e2c0) (0xc0007a8f00) Create stream\nI0308 15:06:33.235414 1205 log.go:172] (0xc00080e2c0) (0xc0007a8f00) Stream added, broadcasting: 5\nI0308 15:06:33.235854 1205 log.go:172] (0xc00080e2c0) Reply frame received 
for 5\nI0308 15:06:33.316203 1205 log.go:172] (0xc00080e2c0) Data frame received for 3\nI0308 15:06:33.316221 1205 log.go:172] (0xc0006fc6e0) (3) Data frame handling\nI0308 15:06:33.316230 1205 log.go:172] (0xc0006fc6e0) (3) Data frame sent\nI0308 15:06:33.316236 1205 log.go:172] (0xc00080e2c0) Data frame received for 3\nI0308 15:06:33.316241 1205 log.go:172] (0xc0006fc6e0) (3) Data frame handling\nI0308 15:06:33.316563 1205 log.go:172] (0xc00080e2c0) Data frame received for 5\nI0308 15:06:33.316594 1205 log.go:172] (0xc0007a8f00) (5) Data frame handling\nI0308 15:06:33.317990 1205 log.go:172] (0xc00080e2c0) Data frame received for 1\nI0308 15:06:33.318004 1205 log.go:172] (0xc0006fc640) (1) Data frame handling\nI0308 15:06:33.318012 1205 log.go:172] (0xc0006fc640) (1) Data frame sent\nI0308 15:06:33.318030 1205 log.go:172] (0xc00080e2c0) (0xc0006fc640) Stream removed, broadcasting: 1\nI0308 15:06:33.318046 1205 log.go:172] (0xc00080e2c0) Go away received\nI0308 15:06:33.318263 1205 log.go:172] (0xc00080e2c0) (0xc0006fc640) Stream removed, broadcasting: 1\nI0308 15:06:33.318277 1205 log.go:172] (0xc00080e2c0) (0xc0006fc6e0) Stream removed, broadcasting: 3\nI0308 15:06:33.318283 1205 log.go:172] (0xc00080e2c0) (0xc0007a8f00) Stream removed, broadcasting: 5\n" Mar 8 15:06:33.319: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 8 15:06:33.319: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 8 15:06:33.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lfjt6 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 15:06:33.493: INFO: stderr: "I0308 15:06:33.426985 1227 log.go:172] (0xc000138840) (0xc000890280) Create stream\nI0308 15:06:33.427022 1227 log.go:172] (0xc000138840) (0xc000890280) Stream added, broadcasting: 1\nI0308 15:06:33.429041 1227 log.go:172] 
(0xc000138840) Reply frame received for 1\nI0308 15:06:33.430278 1227 log.go:172] (0xc000138840) (0xc000890000) Create stream\nI0308 15:06:33.430297 1227 log.go:172] (0xc000138840) (0xc000890000) Stream added, broadcasting: 3\nI0308 15:06:33.431101 1227 log.go:172] (0xc000138840) Reply frame received for 3\nI0308 15:06:33.431119 1227 log.go:172] (0xc000138840) (0xc0008900a0) Create stream\nI0308 15:06:33.431123 1227 log.go:172] (0xc000138840) (0xc0008900a0) Stream added, broadcasting: 5\nI0308 15:06:33.431678 1227 log.go:172] (0xc000138840) Reply frame received for 5\nI0308 15:06:33.491025 1227 log.go:172] (0xc000138840) Data frame received for 5\nI0308 15:06:33.491047 1227 log.go:172] (0xc0008900a0) (5) Data frame handling\nI0308 15:06:33.491061 1227 log.go:172] (0xc000138840) Data frame received for 3\nI0308 15:06:33.491065 1227 log.go:172] (0xc000890000) (3) Data frame handling\nI0308 15:06:33.491071 1227 log.go:172] (0xc000890000) (3) Data frame sent\nI0308 15:06:33.491076 1227 log.go:172] (0xc000138840) Data frame received for 3\nI0308 15:06:33.491080 1227 log.go:172] (0xc000890000) (3) Data frame handling\nI0308 15:06:33.491900 1227 log.go:172] (0xc000138840) Data frame received for 1\nI0308 15:06:33.491934 1227 log.go:172] (0xc000890280) (1) Data frame handling\nI0308 15:06:33.491953 1227 log.go:172] (0xc000890280) (1) Data frame sent\nI0308 15:06:33.491980 1227 log.go:172] (0xc000138840) (0xc000890280) Stream removed, broadcasting: 1\nI0308 15:06:33.492010 1227 log.go:172] (0xc000138840) Go away received\nI0308 15:06:33.492156 1227 log.go:172] (0xc000138840) (0xc000890280) Stream removed, broadcasting: 1\nI0308 15:06:33.492172 1227 log.go:172] (0xc000138840) (0xc000890000) Stream removed, broadcasting: 3\nI0308 15:06:33.492178 1227 log.go:172] (0xc000138840) (0xc0008900a0) Stream removed, broadcasting: 5\n" Mar 8 15:06:33.493: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 8 15:06:33.493: INFO: stdout of mv -v /tmp/index.html 
/usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 8 15:06:33.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-lfjt6 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Mar 8 15:06:33.660: INFO: stderr: "I0308 15:06:33.611521 1249 log.go:172] (0xc000138160) (0xc0006d05a0) Create stream\nI0308 15:06:33.611560 1249 log.go:172] (0xc000138160) (0xc0006d05a0) Stream added, broadcasting: 1\nI0308 15:06:33.612772 1249 log.go:172] (0xc000138160) Reply frame received for 1\nI0308 15:06:33.612804 1249 log.go:172] (0xc000138160) (0xc000628be0) Create stream\nI0308 15:06:33.612812 1249 log.go:172] (0xc000138160) (0xc000628be0) Stream added, broadcasting: 3\nI0308 15:06:33.613398 1249 log.go:172] (0xc000138160) Reply frame received for 3\nI0308 15:06:33.613422 1249 log.go:172] (0xc000138160) (0xc000628d20) Create stream\nI0308 15:06:33.613435 1249 log.go:172] (0xc000138160) (0xc000628d20) Stream added, broadcasting: 5\nI0308 15:06:33.613999 1249 log.go:172] (0xc000138160) Reply frame received for 5\nI0308 15:06:33.658333 1249 log.go:172] (0xc000138160) Data frame received for 5\nI0308 15:06:33.658378 1249 log.go:172] (0xc000628d20) (5) Data frame handling\nI0308 15:06:33.658401 1249 log.go:172] (0xc000138160) Data frame received for 3\nI0308 15:06:33.658407 1249 log.go:172] (0xc000628be0) (3) Data frame handling\nI0308 15:06:33.658412 1249 log.go:172] (0xc000628be0) (3) Data frame sent\nI0308 15:06:33.658417 1249 log.go:172] (0xc000138160) Data frame received for 3\nI0308 15:06:33.658420 1249 log.go:172] (0xc000628be0) (3) Data frame handling\nI0308 15:06:33.659069 1249 log.go:172] (0xc000138160) Data frame received for 1\nI0308 15:06:33.659083 1249 log.go:172] (0xc0006d05a0) (1) Data frame handling\nI0308 15:06:33.659089 1249 log.go:172] (0xc0006d05a0) (1) Data frame sent\nI0308 15:06:33.659096 1249 log.go:172] (0xc000138160) (0xc0006d05a0) 
Stream removed, broadcasting: 1\nI0308 15:06:33.659114 1249 log.go:172] (0xc000138160) Go away received\nI0308 15:06:33.659186 1249 log.go:172] (0xc000138160) (0xc0006d05a0) Stream removed, broadcasting: 1\nI0308 15:06:33.659195 1249 log.go:172] (0xc000138160) (0xc000628be0) Stream removed, broadcasting: 3\nI0308 15:06:33.659199 1249 log.go:172] (0xc000138160) (0xc000628d20) Stream removed, broadcasting: 5\n" Mar 8 15:06:33.660: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Mar 8 15:06:33.660: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Mar 8 15:06:33.660: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 8 15:06:53.672: INFO: Deleting all statefulset in ns e2e-tests-statefulset-lfjt6 Mar 8 15:06:53.675: INFO: Scaling statefulset ss to 0 Mar 8 15:06:53.682: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:06:53.685: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:06:53.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-lfjt6" for this suite. 
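The `mv -v … || true` exec steps above are how this suite toggles pod readiness: nginx's readiness probe fetches `/index.html`, so moving the file out of the web root fails the probe (pod goes NotReady), and moving it back restores it; `|| true` keeps the exec exit status zero even if the file was already moved. A minimal local sketch of just the file-shuffling semantics (paths are illustrative; no cluster required):

```shell
# Simulate the web root the test manipulates (illustrative paths only).
root=$(mktemp -d)
mkdir -p "$root/html"
echo "hello" > "$root/html/index.html"

# Break "readiness": move index.html out of the served directory.
mv -v "$root/html/index.html" "$root/" || true
test ! -e "$root/html/index.html" && echo "probe would fail"

# Restore "readiness": move it back.
mv -v "$root/index.html" "$root/html/" || true
test -e "$root/html/index.html" && echo "probe restored"
```

In the actual test these commands run inside each pod via `kubectl exec`, which is why the log captures a full SPDY stream setup/teardown around every invocation.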
Mar 8 15:06:59.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:06:59.760: INFO: namespace: e2e-tests-statefulset-lfjt6, resource: bindings, ignored listing per whitelist Mar 8 15:06:59.870: INFO: namespace e2e-tests-statefulset-lfjt6 deletion completed in 6.164729596s • [SLOW TEST:88.057 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:06:59.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Mar 8 15:06:59.979: INFO: Waiting up to 5m0s for pod "client-containers-75c92db9-614e-11ea-b38e-0242ac11000f" in namespace "e2e-tests-containers-f9skm" to be "success or failure" Mar 8 15:06:59.983: INFO: Pod "client-containers-75c92db9-614e-11ea-b38e-0242ac11000f": 
Phase="Pending", Reason="", readiness=false. Elapsed: 4.72305ms Mar 8 15:07:01.987: INFO: Pod "client-containers-75c92db9-614e-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008858364s Mar 8 15:07:04.054: INFO: Pod "client-containers-75c92db9-614e-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075122831s STEP: Saw pod success Mar 8 15:07:04.054: INFO: Pod "client-containers-75c92db9-614e-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:07:04.073: INFO: Trying to get logs from node hunter-worker pod client-containers-75c92db9-614e-11ea-b38e-0242ac11000f container test-container: STEP: delete the pod Mar 8 15:07:04.092: INFO: Waiting for pod client-containers-75c92db9-614e-11ea-b38e-0242ac11000f to disappear Mar 8 15:07:04.096: INFO: Pod client-containers-75c92db9-614e-11ea-b38e-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:07:04.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-f9skm" for this suite. 
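The "use the image defaults" pod above simply omits both `command:` and `args:`, so the kubelet runs the image's own ENTRYPOINT/CMD. A hedged sketch of such a manifest (the pod name and image are assumptions for illustration, not the suite's actual spec):

```shell
# Write a hypothetical pod manifest with no command/args overrides;
# with both fields absent, the container image's ENTRYPOINT and CMD apply.
cat <<'EOF' > /tmp/defaults-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-defaults    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0   # assumed image
    # no "command:" and no "args:" -> image defaults are used
EOF
# To run it against a cluster (not executed here):
#   kubectl --kubeconfig=/root/.kube/config apply -f /tmp/defaults-pod.yaml
```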
Mar 8 15:07:10.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:07:10.149: INFO: namespace: e2e-tests-containers-f9skm, resource: bindings, ignored listing per whitelist Mar 8 15:07:10.223: INFO: namespace e2e-tests-containers-f9skm deletion completed in 6.123801653s • [SLOW TEST:10.352 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:07:10.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-7bf82498-614e-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 8 15:07:10.373: INFO: Waiting up to 5m0s for pod "pod-configmaps-7bf8f6c6-614e-11ea-b38e-0242ac11000f" in namespace "e2e-tests-configmap-4mmm5" to be "success or failure" Mar 8 15:07:10.377: INFO: Pod "pod-configmaps-7bf8f6c6-614e-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.407015ms Mar 8 15:07:12.381: INFO: Pod "pod-configmaps-7bf8f6c6-614e-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008206401s STEP: Saw pod success Mar 8 15:07:12.381: INFO: Pod "pod-configmaps-7bf8f6c6-614e-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:07:12.384: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-7bf8f6c6-614e-11ea-b38e-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 8 15:07:12.412: INFO: Waiting for pod pod-configmaps-7bf8f6c6-614e-11ea-b38e-0242ac11000f to disappear Mar 8 15:07:12.418: INFO: Pod pod-configmaps-7bf8f6c6-614e-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:07:12.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4mmm5" for this suite. Mar 8 15:07:18.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:07:18.497: INFO: namespace: e2e-tests-configmap-4mmm5, resource: bindings, ignored listing per whitelist Mar 8 15:07:18.513: INFO: namespace e2e-tests-configmap-4mmm5 deletion completed in 6.092406789s • [SLOW TEST:8.291 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:07:18.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 8 15:07:18.625: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
containers/ pods/ (200; 6.730918ms) Mar 8 15:07:18.628: INFO: (1) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.46276ms) Mar 8 15:07:18.631: INFO: (2) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.235989ms) Mar 8 15:07:18.634: INFO: (3) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.73508ms) Mar 8 15:07:18.637: INFO: (4) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.046884ms) Mar 8 15:07:18.641: INFO: (5) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 3.238969ms) Mar 8 15:07:18.643: INFO: (6) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.806838ms) Mar 8 15:07:18.646: INFO: (7) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.989985ms) Mar 8 15:07:18.649: INFO: (8) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.50729ms) Mar 8 15:07:18.652: INFO: (9) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.719435ms) Mar 8 15:07:18.654: INFO: (10) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.41508ms) Mar 8 15:07:18.657: INFO: (11) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.53088ms) Mar 8 15:07:18.659: INFO: (12) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.693054ms) Mar 8 15:07:18.662: INFO: (13) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.888274ms) Mar 8 15:07:18.665: INFO: (14) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.781749ms) Mar 8 15:07:18.668: INFO: (15) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.5353ms) Mar 8 15:07:18.670: INFO: (16) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.670892ms) Mar 8 15:07:18.673: INFO: (17) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.625903ms) Mar 8 15:07:18.676: INFO: (18) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/ (200; 2.900491ms) Mar 8 15:07:18.679: INFO: (19) /api/v1/nodes/hunter-worker/proxy/logs/: containers/ pods/
(200; 2.831205ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:07:18.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-dt79d" for this suite. Mar 8 15:07:24.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:07:24.716: INFO: namespace: e2e-tests-proxy-dt79d, resource: bindings, ignored listing per whitelist Mar 8 15:07:24.787: INFO: namespace e2e-tests-proxy-dt79d deletion completed in 6.105256145s • [SLOW TEST:6.273 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:07:24.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage 
collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0308 15:07:26.060562 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 15:07:26.060: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:07:26.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-d96n7" for this suite. 
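The garbage-collector test above deletes a Deployment without orphaning and then waits for its ReplicaSet (and pods) to disappear once their owner is gone. A reference-only sketch of the same flow (requires a live cluster, so the kubectl lines are shown as comments; the deployment name is illustrative):

```shell
# Reference-only sketch of a non-orphaning cascade delete (assumed names):
#
#   kubectl create deployment gc-demo --image=nginx
#   kubectl get rs -l app=gc-demo        # one ReplicaSet, owned by gc-demo
#   kubectl delete deployment gc-demo    # default delete: dependents not orphaned
#   kubectl get rs -l app=gc-demo        # eventually: No resources found
#
# The GC may lag briefly, which is why the log above records transient
# "expected 0 rs, got 1 rs" / "expected 0 pods, got 2 pods" states.
echo "cascade delete: owner removed -> dependents garbage collected"
```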
Mar 8 15:07:32.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:07:32.170: INFO: namespace: e2e-tests-gc-d96n7, resource: bindings, ignored listing per whitelist Mar 8 15:07:32.176: INFO: namespace e2e-tests-gc-d96n7 deletion completed in 6.113681897s • [SLOW TEST:7.389 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:07:32.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Mar 8 15:07:32.254: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:07:32.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "e2e-tests-kubectl-xk6cm" for this suite.
Mar 8 15:07:38.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:07:38.381: INFO: namespace: e2e-tests-kubectl-xk6cm, resource: bindings, ignored listing per whitelist
Mar 8 15:07:38.443: INFO: namespace e2e-tests-kubectl-xk6cm deletion completed in 6.097427665s
• [SLOW TEST:6.266 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:07:38.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Mar 8 15:07:39.048: INFO: Waiting up to 5m0s for pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-cjvbc" in namespace "e2e-tests-svcaccounts-r8ww4" to be "success or failure"
Mar 8 15:07:39.083: INFO: Pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-cjvbc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.920918ms
Mar 8 15:07:41.087: INFO: Pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-cjvbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038961692s
Mar 8 15:07:43.091: INFO: Pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-cjvbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043154328s
STEP: Saw pod success
Mar 8 15:07:43.091: INFO: Pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-cjvbc" satisfied condition "success or failure"
Mar 8 15:07:43.094: INFO: Trying to get logs from node hunter-worker pod pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-cjvbc container token-test:
STEP: delete the pod
Mar 8 15:07:43.131: INFO: Waiting for pod pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-cjvbc to disappear
Mar 8 15:07:43.138: INFO: Pod pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-cjvbc no longer exists
STEP: Creating a pod to test consume service account root CA
Mar 8 15:07:43.142: INFO: Waiting up to 5m0s for pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-7sqlr" in namespace "e2e-tests-svcaccounts-r8ww4" to be "success or failure"
Mar 8 15:07:43.160: INFO: Pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-7sqlr": Phase="Pending", Reason="", readiness=false. Elapsed: 17.426117ms
Mar 8 15:07:45.164: INFO: Pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-7sqlr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02186914s
Mar 8 15:07:47.169: INFO: Pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-7sqlr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026728508s
STEP: Saw pod success
Mar 8 15:07:47.169: INFO: Pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-7sqlr" satisfied condition "success or failure"
Mar 8 15:07:47.172: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-7sqlr container root-ca-test:
STEP: delete the pod
Mar 8 15:07:47.193: INFO: Waiting for pod pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-7sqlr to disappear
Mar 8 15:07:47.198: INFO: Pod pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-7sqlr no longer exists
STEP: Creating a pod to test consume service account namespace
Mar 8 15:07:47.202: INFO: Waiting up to 5m0s for pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-tbbt2" in namespace "e2e-tests-svcaccounts-r8ww4" to be "success or failure"
Mar 8 15:07:47.217: INFO: Pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-tbbt2": Phase="Pending", Reason="", readiness=false. Elapsed: 15.759812ms
Mar 8 15:07:49.221: INFO: Pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-tbbt2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019726175s
Mar 8 15:07:51.226: INFO: Pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-tbbt2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024250641s
STEP: Saw pod success
Mar 8 15:07:51.226: INFO: Pod "pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-tbbt2" satisfied condition "success or failure"
Mar 8 15:07:51.229: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-tbbt2 container namespace-test:
STEP: delete the pod
Mar 8 15:07:51.260: INFO: Waiting for pod pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-tbbt2 to disappear
Mar 8 15:07:51.275: INFO: Pod pod-service-account-8d1327fb-614e-11ea-b38e-0242ac11000f-tbbt2 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:07:51.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-r8ww4" for this suite.
Mar 8 15:07:57.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:07:57.360: INFO: namespace: e2e-tests-svcaccounts-r8ww4, resource: bindings, ignored listing per whitelist
Mar 8 15:07:57.374: INFO: namespace e2e-tests-svcaccounts-r8ww4 deletion completed in 6.095769063s
• [SLOW TEST:18.932 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:07:57.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Mar 8 15:07:57.474: INFO: Pod name pod-release: Found 0 pods out of 1
Mar 8 15:08:02.479: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:08:03.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-58zz5" for this suite.
Mar 8 15:08:09.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:08:09.553: INFO: namespace: e2e-tests-replication-controller-58zz5, resource: bindings, ignored listing per whitelist
Mar 8 15:08:09.604: INFO: namespace e2e-tests-replication-controller-58zz5 deletion completed in 6.104601768s
• [SLOW TEST:12.229 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:08:09.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-bfvbr
Mar 8 15:08:11.742: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-bfvbr
STEP: checking the pod's current state and verifying that restartCount is present
Mar 8 15:08:11.745: INFO: Initial restart count of pod liveness-exec is 0
Mar 8 15:08:59.850: INFO: Restart count of pod e2e-tests-container-probe-bfvbr/liveness-exec is now 1 (48.105098037s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:08:59.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-bfvbr" for this suite.
Mar 8 15:09:05.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:09:05.930: INFO: namespace: e2e-tests-container-probe-bfvbr, resource: bindings, ignored listing per whitelist
Mar 8 15:09:05.990: INFO: namespace e2e-tests-container-probe-bfvbr deletion completed in 6.09518552s
• [SLOW TEST:56.386 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:09:05.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 8 15:09:06.071: INFO: Creating deployment "nginx-deployment"
Mar 8 15:09:06.074: INFO: Waiting for observed generation 1
Mar 8 15:09:08.168: INFO: Waiting for all required pods to come up
Mar 8 15:09:08.213: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 8 15:09:10.223: INFO: Waiting for deployment "nginx-deployment" to complete
Mar 8 15:09:10.270: INFO: Updating deployment "nginx-deployment" with a non-existent image
Mar 8 15:09:10.276: INFO: Updating deployment nginx-deployment
Mar 8 15:09:10.276: INFO: Waiting for observed generation 2
Mar 8 15:09:12.281: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 8 15:09:12.283: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 8 15:09:12.285: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Mar 8 15:09:12.292: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 8 15:09:12.292: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 8 15:09:12.294: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Mar 8 15:09:12.297: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Mar 8 15:09:12.297: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Mar 8 15:09:12.302: INFO: Updating deployment nginx-deployment
Mar 8 15:09:12.302: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Mar 8 15:09:12.321: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 8 15:09:12.345: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Mar 8 15:09:12.540: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b8v6f/deployments/nginx-deployment,UID:c0f31d8e-614e-11ea-9978-0242ac11000d,ResourceVersion:7295,Generation:3,CreationTimestamp:2020-03-08 15:09:06 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-03-08 15:09:10 +0000 UTC 2020-03-08 15:09:06 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-03-08 15:09:12 +0000 UTC 2020-03-08 15:09:12 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Mar 8 15:09:12.604: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b8v6f/replicasets/nginx-deployment-5c98f8fb5,UID:c374c704-614e-11ea-9978-0242ac11000d,ResourceVersion:7309,Generation:3,CreationTimestamp:2020-03-08 15:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c0f31d8e-614e-11ea-9978-0242ac11000d 0xc002554207 0xc002554208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 8 15:09:12.604: INFO: All old ReplicaSets of Deployment "nginx-deployment": Mar 8 15:09:12.604: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-b8v6f/replicasets/nginx-deployment-85ddf47c5d,UID:c0f89d7a-614e-11ea-9978-0242ac11000d,ResourceVersion:7297,Generation:3,CreationTimestamp:2020-03-08 15:09:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment c0f31d8e-614e-11ea-9978-0242ac11000d 0xc002554347 0xc002554348}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Mar 8 15:09:12.698: INFO: Pod "nginx-deployment-5c98f8fb5-6nplq" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6nplq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-5c98f8fb5-6nplq,UID:c4bf010a-614e-11ea-9978-0242ac11000d,ResourceVersion:7307,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c374c704-614e-11ea-9978-0242ac11000d 0xc0023e0140 0xc0023e0141}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e01c0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0023e01e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.698: INFO: Pod "nginx-deployment-5c98f8fb5-77dhr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-77dhr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-5c98f8fb5-77dhr,UID:c4b9b5d8-614e-11ea-9978-0242ac11000d,ResourceVersion:7300,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c374c704-614e-11ea-9978-0242ac11000d 0xc0023e0250 0xc0023e0251}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e0340} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e0360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.698: INFO: Pod "nginx-deployment-5c98f8fb5-8jhfc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8jhfc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-5c98f8fb5-8jhfc,UID:c4b986ef-614e-11ea-9978-0242ac11000d,ResourceVersion:7299,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c374c704-614e-11ea-9978-0242ac11000d 0xc0023e03d0 0xc0023e03d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e04d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e04f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.698: INFO: Pod "nginx-deployment-5c98f8fb5-8qxnz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8qxnz,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-5c98f8fb5-8qxnz,UID:c4acf01e-614e-11ea-9978-0242ac11000d,ResourceVersion:7270,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c374c704-614e-11ea-9978-0242ac11000d 0xc0023e0560 0xc0023e0561}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e05e0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0023e0600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.699: INFO: Pod "nginx-deployment-5c98f8fb5-98b72" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-98b72,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-5c98f8fb5-98b72,UID:c4b96ee2-614e-11ea-9978-0242ac11000d,ResourceVersion:7301,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c374c704-614e-11ea-9978-0242ac11000d 0xc0023e0670 0xc0023e0671}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e06f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e0710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.699: INFO: Pod "nginx-deployment-5c98f8fb5-fjfvx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fjfvx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-5c98f8fb5-fjfvx,UID:c3908cd4-614e-11ea-9978-0242ac11000d,ResourceVersion:7251,Generation:0,CreationTimestamp:2020-03-08 15:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c374c704-614e-11ea-9978-0242ac11000d 0xc0023e0780 0xc0023e0781}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e0800} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e0820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:,StartTime:2020-03-08 15:09:10 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.699: INFO: Pod "nginx-deployment-5c98f8fb5-fmq4w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fmq4w,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-5c98f8fb5-fmq4w,UID:c37553e1-614e-11ea-9978-0242ac11000d,ResourceVersion:7248,Generation:0,CreationTimestamp:2020-03-08 15:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c374c704-614e-11ea-9978-0242ac11000d 0xc0023e08f0 0xc0023e08f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e0970} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e0990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-08 15:09:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.699: INFO: Pod "nginx-deployment-5c98f8fb5-gksg9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gksg9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-5c98f8fb5-gksg9,UID:c4b9ab42-614e-11ea-9978-0242ac11000d,ResourceVersion:7305,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c374c704-614e-11ea-9978-0242ac11000d 0xc0023e0a50 0xc0023e0a51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e0ad0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0023e0af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.699: INFO: Pod "nginx-deployment-5c98f8fb5-k6cf4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-k6cf4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-5c98f8fb5-k6cf4,UID:c4b05574-614e-11ea-9978-0242ac11000d,ResourceVersion:7278,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c374c704-614e-11ea-9978-0242ac11000d 0xc0023e0b60 0xc0023e0b61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e0bf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e0c50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.699: INFO: Pod "nginx-deployment-5c98f8fb5-nccck" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nccck,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-5c98f8fb5-nccck,UID:c3776bf0-614e-11ea-9978-0242ac11000d,ResourceVersion:7254,Generation:0,CreationTimestamp:2020-03-08 15:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c374c704-614e-11ea-9978-0242ac11000d 0xc0023e0cc0 0xc0023e0cc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e0d40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e0d60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-08 15:09:10 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.699: INFO: Pod "nginx-deployment-5c98f8fb5-qxvqk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qxvqk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-5c98f8fb5-qxvqk,UID:c3776fcb-614e-11ea-9978-0242ac11000d,ResourceVersion:7249,Generation:0,CreationTimestamp:2020-03-08 15:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c374c704-614e-11ea-9978-0242ac11000d 0xc0023e0e60 0xc0023e0e61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e0ee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e0f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:,StartTime:2020-03-08 15:09:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.700: INFO: Pod "nginx-deployment-5c98f8fb5-s76h5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-s76h5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-5c98f8fb5-s76h5,UID:c394544e-614e-11ea-9978-0242ac11000d,ResourceVersion:7315,Generation:0,CreationTimestamp:2020-03-08 15:09:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c374c704-614e-11ea-9978-0242ac11000d 0xc0023e0ff0 0xc0023e0ff1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e10f0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc0023e1110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-08 15:09:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.700: INFO: Pod "nginx-deployment-5c98f8fb5-t6ss4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-t6ss4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-5c98f8fb5-t6ss4,UID:c4b0a529-614e-11ea-9978-0242ac11000d,ResourceVersion:7283,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 c374c704-614e-11ea-9978-0242ac11000d 0xc0023e11d0 0xc0023e11d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e1250} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e1300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.700: INFO: Pod "nginx-deployment-85ddf47c5d-4rxlq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4rxlq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-4rxlq,UID:c0fe184b-614e-11ea-9978-0242ac11000d,ResourceVersion:7159,Generation:0,CreationTimestamp:2020-03-08 15:09:06 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023e1370 0xc0023e1371}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e13e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e1400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2020-03-08 15:09:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.32,StartTime:2020-03-08 15:09:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-08 15:09:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://df51ca100d7023714e677f2c4e60dd8a0c15f08325278f1a80def3987718de59}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.700: INFO: Pod "nginx-deployment-85ddf47c5d-5smdh" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5smdh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-5smdh,UID:c0fef32b-614e-11ea-9978-0242ac11000d,ResourceVersion:7192,Generation:0,CreationTimestamp:2020-03-08 15:09:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023e1570 0xc0023e1571}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e15e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e1600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.35,StartTime:2020-03-08 15:09:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-08 15:09:09 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://78e27da91a78b9ee3951db660bcb2cff3995ba57716957bde05973eeeebbd548}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.700: INFO: Pod "nginx-deployment-85ddf47c5d-665s5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-665s5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-665s5,UID:c4b9b97b-614e-11ea-9978-0242ac11000d,ResourceVersion:7304,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023e1800 0xc0023e1801}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0023e1870} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e1890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.700: INFO: Pod "nginx-deployment-85ddf47c5d-7zznl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7zznl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-7zznl,UID:c4b9ac4c-614e-11ea-9978-0242ac11000d,ResourceVersion:7303,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023e1a50 0xc0023e1a51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e1ad0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e1af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.700: INFO: Pod "nginx-deployment-85ddf47c5d-9dxvw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9dxvw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-9dxvw,UID:c4acf3f9-614e-11ea-9978-0242ac11000d,ResourceVersion:7271,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023e1c50 0xc0023e1c51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023e1ce0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e1d00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.701: INFO: Pod "nginx-deployment-85ddf47c5d-9rnf6" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9rnf6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-9rnf6,UID:c1017293-614e-11ea-9978-0242ac11000d,ResourceVersion:7179,Generation:0,CreationTimestamp:2020-03-08 15:09:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023e1d90 0xc0023e1d91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0023e1e10} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023e1e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:10.244.2.37,StartTime:2020-03-08 15:09:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-08 15:09:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3ff1f0994eb7344f1cc8bc6e8a9710494c58074b1f046bd410dc311b7d6a9c90}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.701: INFO: Pod "nginx-deployment-85ddf47c5d-bm2c7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bm2c7,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-bm2c7,UID:c0fe14f4-614e-11ea-9978-0242ac11000d,ResourceVersion:7188,Generation:0,CreationTimestamp:2020-03-08 15:09:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023e1f70 0xc0023e1f71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023be020} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023be040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.33,StartTime:2020-03-08 15:09:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-08 15:09:08 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://cd7386e246ccd6b543926624f8b090a31406a73a4a4afaaf2d1cafa88a8f59e7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.701: INFO: Pod "nginx-deployment-85ddf47c5d-bnzxn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bnzxn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-bnzxn,UID:c4b99be1-614e-11ea-9978-0242ac11000d,ResourceVersion:7302,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023be110 0xc0023be111}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023be180} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023be1a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.701: INFO: Pod "nginx-deployment-85ddf47c5d-gznfq" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gznfq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-gznfq,UID:c0fd8896-614e-11ea-9978-0242ac11000d,ResourceVersion:7152,Generation:0,CreationTimestamp:2020-03-08 15:09:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023be220 0xc0023be221}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023be290} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023be2b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:10.244.2.35,StartTime:2020-03-08 15:09:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-08 15:09:08 +0000 UTC,} nil} {nil 
nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://676b9c373d05607d24be540c189d8984b28d60a71344e1f03a1ea293bbce64bb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.701: INFO: Pod "nginx-deployment-85ddf47c5d-h5j6t" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-h5j6t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-h5j6t,UID:c0fee1ef-614e-11ea-9978-0242ac11000d,ResourceVersion:7157,Generation:0,CreationTimestamp:2020-03-08 15:09:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023be370 0xc0023be371}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023be3e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023be400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:10.244.2.36,StartTime:2020-03-08 15:09:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-08 15:09:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://537af17fd2359e9a0c8067c7cd9b474927560da35b32aa35d7c3793ad921e5d5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.701: INFO: Pod "nginx-deployment-85ddf47c5d-jlnm5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jlnm5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-jlnm5,UID:c4b0b19c-614e-11ea-9978-0242ac11000d,ResourceVersion:7287,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023be4c0 0xc0023be4c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0023be530} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023be550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.701: INFO: Pod "nginx-deployment-85ddf47c5d-lrjxx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-lrjxx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-lrjxx,UID:c4b0838f-614e-11ea-9978-0242ac11000d,ResourceVersion:7280,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023be5c0 0xc0023be5c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023be630} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023be650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.702: INFO: Pod "nginx-deployment-85ddf47c5d-m7hlf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m7hlf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-m7hlf,UID:c4b0ac60-614e-11ea-9978-0242ac11000d,ResourceVersion:7290,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023be6c0 0xc0023be6c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023be730} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023be750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.702: INFO: Pod "nginx-deployment-85ddf47c5d-m7wxq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m7wxq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-m7wxq,UID:c1015fee-614e-11ea-9978-0242ac11000d,ResourceVersion:7182,Generation:0,CreationTimestamp:2020-03-08 15:09:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023be7c0 0xc0023be7c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0023be830} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023be850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:10.244.2.38,StartTime:2020-03-08 15:09:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-08 15:09:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://cda5fcfd534b15e7481f4bab54e1479390bbe7fa221a09b638027034d411f569}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.702: INFO: Pod "nginx-deployment-85ddf47c5d-plzl2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-plzl2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-plzl2,UID:c4b98c06-614e-11ea-9978-0242ac11000d,ResourceVersion:7296,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023be910 0xc0023be911}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023be980} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023be9a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.702: INFO: Pod "nginx-deployment-85ddf47c5d-twm5j" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-twm5j,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-twm5j,UID:c4abe463-614e-11ea-9978-0242ac11000d,ResourceVersion:7266,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023bea10 0xc0023bea11}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0023bea80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023beaa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.702: INFO: Pod "nginx-deployment-85ddf47c5d-txrph" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-txrph,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-txrph,UID:c4b9690e-614e-11ea-9978-0242ac11000d,ResourceVersion:7298,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023beb10 0xc0023beb11}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023beb80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023bec20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.702: INFO: Pod "nginx-deployment-85ddf47c5d-wdtd8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wdtd8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-wdtd8,UID:c4ad0874-614e-11ea-9978-0242ac11000d,ResourceVersion:7272,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023bec90 0xc0023bec91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023bed00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023bed20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.702: INFO: Pod "nginx-deployment-85ddf47c5d-xq9qc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-xq9qc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-xq9qc,UID:c4b0b2c0-614e-11ea-9978-0242ac11000d,ResourceVersion:7289,Generation:0,CreationTimestamp:2020-03-08 15:09:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023bed90 0xc0023bed91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc0023bee00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023bee20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:12 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Mar 8 15:09:12.702: INFO: Pod "nginx-deployment-85ddf47c5d-z6z5z" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z6z5z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-b8v6f,SelfLink:/api/v1/namespaces/e2e-tests-deployment-b8v6f/pods/nginx-deployment-85ddf47c5d-z6z5z,UID:c1016802-614e-11ea-9978-0242ac11000d,ResourceVersion:7166,Generation:0,CreationTimestamp:2020-03-08 15:09:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d c0f89d7a-614e-11ea-9978-0242ac11000d 0xc0023bee90 0xc0023bee91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5s5k2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5s5k2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5s5k2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0023bef00} {node.kubernetes.io/unreachable Exists NoExecute 0xc0023bef20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:08 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:09:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.34,StartTime:2020-03-08 15:09:06 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-08 15:09:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f88c66355742dc22d3c8c90c033cb131b2786b2b4abe7989b833028017e9af1d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:09:12.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-b8v6f" for this suite. 
Mar 8 15:09:22.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:09:22.978: INFO: namespace: e2e-tests-deployment-b8v6f, resource: bindings, ignored listing per whitelist
Mar 8 15:09:23.016: INFO: namespace e2e-tests-deployment-b8v6f deletion completed in 10.171029855s
• [SLOW TEST:17.026 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:09:23.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Mar 8 15:09:23.101: INFO: Waiting up to 5m0s for pod "pod-cb1893ab-614e-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-hvrlt" to be "success or failure"
Mar 8 15:09:23.116: INFO: Pod "pod-cb1893ab-614e-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.759195ms
Mar 8 15:09:25.119: INFO: Pod "pod-cb1893ab-614e-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018096074s
Mar 8 15:09:27.144: INFO: Pod "pod-cb1893ab-614e-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043304975s
Mar 8 15:09:29.154: INFO: Pod "pod-cb1893ab-614e-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053188356s
Mar 8 15:09:31.157: INFO: Pod "pod-cb1893ab-614e-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055607399s
STEP: Saw pod success
Mar 8 15:09:31.157: INFO: Pod "pod-cb1893ab-614e-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:09:31.159: INFO: Trying to get logs from node hunter-worker2 pod pod-cb1893ab-614e-11ea-b38e-0242ac11000f container test-container:
STEP: delete the pod
Mar 8 15:09:31.204: INFO: Waiting for pod pod-cb1893ab-614e-11ea-b38e-0242ac11000f to disappear
Mar 8 15:09:31.206: INFO: Pod pod-cb1893ab-614e-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:09:31.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hvrlt" for this suite.
Mar 8 15:09:37.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:09:37.246: INFO: namespace: e2e-tests-emptydir-hvrlt, resource: bindings, ignored listing per whitelist
Mar 8 15:09:37.267: INFO: namespace e2e-tests-emptydir-hvrlt deletion completed in 6.059088747s
• [SLOW TEST:14.250 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
volume on default medium should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:09:37.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-d3936924-614e-11ea-b38e-0242ac11000f
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-d3936924-614e-11ea-b38e-0242ac11000f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:09:41.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-v8smd" for this suite.
Mar 8 15:10:03.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:10:03.425: INFO: namespace: e2e-tests-configmap-v8smd, resource: bindings, ignored listing per whitelist
Mar 8 15:10:03.480: INFO: namespace e2e-tests-configmap-v8smd deletion completed in 22.112449155s
• [SLOW TEST:26.213 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:10:03.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 8 15:10:03.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-r8pc8'
Mar 8 15:10:05.437: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 8 15:10:05.437: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Mar 8 15:10:05.444: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Mar 8 15:10:05.535: INFO: scanned /root for discovery docs:
Mar 8 15:10:05.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-r8pc8'
Mar 8 15:10:21.762: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Mar 8 15:10:21.762: INFO: stdout: "Created e2e-test-nginx-rc-756fe7fbc506a2853b2da66d240fc3c2\nScaling up e2e-test-nginx-rc-756fe7fbc506a2853b2da66d240fc3c2 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-756fe7fbc506a2853b2da66d240fc3c2 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-756fe7fbc506a2853b2da66d240fc3c2 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Mar 8 15:10:21.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-r8pc8'
Mar 8 15:10:21.874: INFO: stderr: ""
Mar 8 15:10:21.874: INFO: stdout: "e2e-test-nginx-rc-756fe7fbc506a2853b2da66d240fc3c2-fts7g "
Mar 8 15:10:21.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-756fe7fbc506a2853b2da66d240fc3c2-fts7g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r8pc8'
Mar 8 15:10:21.956: INFO: stderr: ""
Mar 8 15:10:21.956: INFO: stdout: "true"
Mar 8 15:10:21.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-756fe7fbc506a2853b2da66d240fc3c2-fts7g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-r8pc8'
Mar 8 15:10:22.025: INFO: stderr: ""
Mar 8 15:10:22.025: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Mar 8 15:10:22.025: INFO: e2e-test-nginx-rc-756fe7fbc506a2853b2da66d240fc3c2-fts7g is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Mar 8 15:10:22.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-r8pc8'
Mar 8 15:10:22.095: INFO: stderr: ""
Mar 8 15:10:22.095: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:10:22.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r8pc8" for this suite.
Mar 8 15:10:44.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:10:44.191: INFO: namespace: e2e-tests-kubectl-r8pc8, resource: bindings, ignored listing per whitelist
Mar 8 15:10:44.221: INFO: namespace e2e-tests-kubectl-r8pc8 deletion completed in 22.124070043s
• [SLOW TEST:40.741 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:10:44.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 8 15:10:44.312: INFO: Waiting up to 5m0s for pod "pod-fb8019c2-614e-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-mjtn9" to be "success or failure"
Mar 8 15:10:44.323: INFO: Pod "pod-fb8019c2-614e-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.344467ms
Mar 8 15:10:46.327: INFO: Pod "pod-fb8019c2-614e-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014380232s
STEP: Saw pod success
Mar 8 15:10:46.327: INFO: Pod "pod-fb8019c2-614e-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:10:46.329: INFO: Trying to get logs from node hunter-worker pod pod-fb8019c2-614e-11ea-b38e-0242ac11000f container test-container:
STEP: delete the pod
Mar 8 15:10:46.348: INFO: Waiting for pod pod-fb8019c2-614e-11ea-b38e-0242ac11000f to disappear
Mar 8 15:10:46.352: INFO: Pod pod-fb8019c2-614e-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:10:46.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mjtn9" for this suite.
Mar 8 15:10:52.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:10:52.453: INFO: namespace: e2e-tests-emptydir-mjtn9, resource: bindings, ignored listing per whitelist
Mar 8 15:10:52.468: INFO: namespace e2e-tests-emptydir-mjtn9 deletion completed in 6.112515717s
• [SLOW TEST:8.246 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0644,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:10:52.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-006b6db0-614f-11ea-b38e-0242ac11000f
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-006b6db0-614f-11ea-b38e-0242ac11000f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:12:25.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5zhz7" for this suite.
Mar 8 15:12:39.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:12:39.655: INFO: namespace: e2e-tests-projected-5zhz7, resource: bindings, ignored listing per whitelist
Mar 8 15:12:39.666: INFO: namespace e2e-tests-projected-5zhz7 deletion completed in 14.113018883s
• [SLOW TEST:107.197 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:12:39.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 8 15:12:39.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-rdpjl'
Mar 8 15:12:39.901: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Mar 8 15:12:39.901: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Mar 8 15:12:39.935: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-cd75t]
Mar 8 15:12:39.935: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-cd75t" in namespace "e2e-tests-kubectl-rdpjl" to be "running and ready"
Mar 8 15:12:39.942: INFO: Pod "e2e-test-nginx-rc-cd75t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.214407ms
Mar 8 15:12:42.039: INFO: Pod "e2e-test-nginx-rc-cd75t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103871216s
Mar 8 15:12:44.044: INFO: Pod "e2e-test-nginx-rc-cd75t": Phase="Running", Reason="", readiness=true. Elapsed: 4.108261453s
Mar 8 15:12:44.044: INFO: Pod "e2e-test-nginx-rc-cd75t" satisfied condition "running and ready"
Mar 8 15:12:44.044: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-cd75t]
Mar 8 15:12:44.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-rdpjl'
Mar 8 15:12:44.282: INFO: stderr: ""
Mar 8 15:12:44.282: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Mar 8 15:12:44.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-rdpjl'
Mar 8 15:12:44.417: INFO: stderr: ""
Mar 8 15:12:44.417: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:12:44.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-rdpjl" for this suite.
Mar 8 15:12:50.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:12:50.453: INFO: namespace: e2e-tests-kubectl-rdpjl, resource: bindings, ignored listing per whitelist
Mar 8 15:12:50.492: INFO: namespace e2e-tests-kubectl-rdpjl deletion completed in 6.071991132s
• [SLOW TEST:10.826 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:12:50.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 8 15:12:53.158: INFO: Successfully updated pod "pod-update-46c22532-614f-11ea-b38e-0242ac11000f"
STEP: verifying the updated pod is in kubernetes
Mar 8 15:12:53.180: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:12:53.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-6vb4z" for this suite.
Mar 8 15:13:15.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:13:15.288: INFO: namespace: e2e-tests-pods-6vb4z, resource: bindings, ignored listing per whitelist Mar 8 15:13:15.320: INFO: namespace e2e-tests-pods-6vb4z deletion completed in 22.133992132s • [SLOW TEST:24.828 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:13:15.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-558e6fe0-614f-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 8 15:13:15.404: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-558f06e2-614f-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-dp49d" to be "success or failure" Mar 8 15:13:15.409: INFO: Pod "pod-projected-configmaps-558f06e2-614f-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.997056ms Mar 8 15:13:17.411: INFO: Pod "pod-projected-configmaps-558f06e2-614f-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0074391s STEP: Saw pod success Mar 8 15:13:17.411: INFO: Pod "pod-projected-configmaps-558f06e2-614f-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:13:17.413: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-558f06e2-614f-11ea-b38e-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 8 15:13:17.471: INFO: Waiting for pod pod-projected-configmaps-558f06e2-614f-11ea-b38e-0242ac11000f to disappear Mar 8 15:13:17.481: INFO: Pod pod-projected-configmaps-558f06e2-614f-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:13:17.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-dp49d" for this suite. 
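The repeated lines of the form 'Waiting up to 5m0s for pod "..." to be "success or failure"' followed by Phase="Pending" ... Phase="Succeeded" come from a poll-until-condition loop. A minimal sketch of that wait pattern (the function name and the stubbed phase sequence are illustrative; the real framework polls the API server for the pod's status.phase):

```python
import time

# Poll get_phase() until it returns one of done_phases or timeout
# elapses, logging phase and elapsed time like the e2e framework does.
def wait_for_pod_condition(get_phase, done_phases, timeout=300.0, interval=0.01):
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Phase="{phase}", Elapsed: {elapsed:.6f}s')
        if phase in done_phases:
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod stuck in {phase!r} after {timeout}s")
        time.sleep(interval)

# Illustrative stub: a pod that reports Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
final = wait_for_pod_condition(lambda: next(phases), {"Succeeded", "Failed"},
                               timeout=5.0)
print("satisfied condition:", final)
```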
Mar 8 15:13:23.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:13:23.501: INFO: namespace: e2e-tests-projected-dp49d, resource: bindings, ignored listing per whitelist Mar 8 15:13:23.543: INFO: namespace e2e-tests-projected-dp49d deletion completed in 6.05988569s • [SLOW TEST:8.223 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:13:23.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:13:29.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-52mnt" for this suite. Mar 8 15:13:35.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:13:36.049: INFO: namespace: e2e-tests-namespaces-52mnt, resource: bindings, ignored listing per whitelist Mar 8 15:13:36.053: INFO: namespace e2e-tests-namespaces-52mnt deletion completed in 6.112221775s STEP: Destroying namespace "e2e-tests-nsdeletetest-rwzm2" for this suite. Mar 8 15:13:36.055: INFO: Namespace e2e-tests-nsdeletetest-rwzm2 was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-c774k" for this suite. Mar 8 15:13:42.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:13:42.100: INFO: namespace: e2e-tests-nsdeletetest-c774k, resource: bindings, ignored listing per whitelist Mar 8 15:13:42.116: INFO: namespace e2e-tests-nsdeletetest-c774k deletion completed in 6.061213191s • [SLOW TEST:18.573 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:13:42.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 8 15:13:42.213: INFO: Waiting up to 5m0s for pod "pod-6589842a-614f-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-f8x62" to be "success or failure" Mar 8 15:13:42.217: INFO: Pod "pod-6589842a-614f-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.468563ms Mar 8 15:13:44.220: INFO: Pod "pod-6589842a-614f-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00761581s STEP: Saw pod success Mar 8 15:13:44.220: INFO: Pod "pod-6589842a-614f-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:13:44.223: INFO: Trying to get logs from node hunter-worker pod pod-6589842a-614f-11ea-b38e-0242ac11000f container test-container: STEP: delete the pod Mar 8 15:13:44.237: INFO: Waiting for pod pod-6589842a-614f-11ea-b38e-0242ac11000f to disappear Mar 8 15:13:44.242: INFO: Pod pod-6589842a-614f-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:13:44.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-f8x62" for this suite. 
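The "(root,0644,default)" EmptyDir case above writes a file with mode 0644 on the default medium and checks the permission bits read back. A local sketch of that assertion, with a temporary directory standing in for the emptyDir mount (paths are illustrative; the real test runs a mount-tester container inside the pod):

```python
import os
import stat
import tempfile

# Local model of the (root,0644,default) EmptyDir check: write a file,
# set mode 0644, and verify stat reports exactly those permission bits.
with tempfile.TemporaryDirectory() as voldir:   # stands in for the emptyDir volume
    path = os.path.join(voldir, "test-file")
    with open(path, "wb") as f:
        f.write(b"mount-tester new file\n")
    os.chmod(path, 0o644)                       # the mode under test
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print("perms of test-file:", oct(mode))
```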
Mar 8 15:13:50.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:13:50.274: INFO: namespace: e2e-tests-emptydir-f8x62, resource: bindings, ignored listing per whitelist Mar 8 15:13:50.328: INFO: namespace e2e-tests-emptydir-f8x62 deletion completed in 6.083109088s • [SLOW TEST:8.211 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:13:50.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-4ppz8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4ppz8 to expose endpoints map[] Mar 8 15:13:50.456: INFO: Get endpoints failed (17.827125ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 8 15:13:51.459: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4ppz8 exposes 
endpoints map[] (1.021199717s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-4ppz8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4ppz8 to expose endpoints map[pod1:[100]] Mar 8 15:13:53.535: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4ppz8 exposes endpoints map[pod1:[100]] (2.069565152s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-4ppz8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4ppz8 to expose endpoints map[pod1:[100] pod2:[101]] Mar 8 15:13:55.646: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4ppz8 exposes endpoints map[pod1:[100] pod2:[101]] (2.108051161s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-4ppz8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4ppz8 to expose endpoints map[pod2:[101]] Mar 8 15:13:56.681: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4ppz8 exposes endpoints map[pod2:[101]] (1.029864046s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-4ppz8 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4ppz8 to expose endpoints map[] Mar 8 15:13:56.712: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4ppz8 exposes endpoints map[] (26.583527ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:13:56.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-4ppz8" for this suite. 
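The Services test above drives the service's endpoints map through a fixed sequence as pods are created and deleted; each "waiting up to 3m0s ... to expose endpoints map[...]" line corresponds to one transition. A compact replay of those transitions (the pod names and ports mirror the log; the real test reads Endpoints objects from the API server):

```python
# Replay of the endpoint transitions logged by the multiport test:
# map[] -> map[pod1:[100]] -> map[pod1:[100] pod2:[101]]
#       -> map[pod2:[101]] -> map[]
transitions = [
    ("create pod1", {"pod1": [100]}),
    ("create pod2", {"pod1": [100], "pod2": [101]}),
    ("delete pod1", {"pod2": [101]}),
    ("delete pod2", {}),
]
endpoints = {}
for action, expected in transitions:
    verb, pod = action.split()
    if verb == "create":
        endpoints[pod] = [100 if pod == "pod1" else 101]
    else:
        del endpoints[pod]
    assert endpoints == expected, f"after {action}: {endpoints}"
    print(f"{action}: service exposes endpoints {endpoints}")
```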
Mar 8 15:14:18.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:14:18.846: INFO: namespace: e2e-tests-services-4ppz8, resource: bindings, ignored listing per whitelist Mar 8 15:14:18.863: INFO: namespace e2e-tests-services-4ppz8 deletion completed in 22.068941648s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:28.535 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:14:18.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-7b790eef-614f-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume secrets Mar 8 15:14:19.037: INFO: Waiting up to 5m0s for pod "pod-secrets-7b7ab18e-614f-11ea-b38e-0242ac11000f" in namespace "e2e-tests-secrets-sdfxd" to be "success or failure" Mar 8 15:14:19.047: INFO: Pod "pod-secrets-7b7ab18e-614f-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.865035ms Mar 8 15:14:21.051: INFO: Pod "pod-secrets-7b7ab18e-614f-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013718815s STEP: Saw pod success Mar 8 15:14:21.051: INFO: Pod "pod-secrets-7b7ab18e-614f-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:14:21.054: INFO: Trying to get logs from node hunter-worker pod pod-secrets-7b7ab18e-614f-11ea-b38e-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 8 15:14:21.096: INFO: Waiting for pod pod-secrets-7b7ab18e-614f-11ea-b38e-0242ac11000f to disappear Mar 8 15:14:21.100: INFO: Pod pod-secrets-7b7ab18e-614f-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:14:21.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-sdfxd" for this suite. Mar 8 15:14:27.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:14:27.167: INFO: namespace: e2e-tests-secrets-sdfxd, resource: bindings, ignored listing per whitelist Mar 8 15:14:27.227: INFO: namespace e2e-tests-secrets-sdfxd deletion completed in 6.12348787s • [SLOW TEST:8.364 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:14:27.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-upd-80737f16-614f-11ea-b38e-0242ac11000f STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:14:29.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-k8cpq" for this suite. Mar 8 15:14:51.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:14:51.467: INFO: namespace: e2e-tests-configmap-k8cpq, resource: bindings, ignored listing per whitelist Mar 8 15:14:51.495: INFO: namespace e2e-tests-configmap-k8cpq deletion completed in 22.09593204s • [SLOW TEST:24.268 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: 
Creating a kubernetes client Mar 8 15:14:51.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 8 15:14:51.563: INFO: Creating deployment "test-recreate-deployment" Mar 8 15:14:51.568: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 8 15:14:51.603: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Mar 8 15:14:53.610: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 8 15:14:53.613: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 8 15:14:53.621: INFO: Updating deployment test-recreate-deployment Mar 8 15:14:53.621: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 8 15:14:53.920: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-jg8pn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jg8pn/deployments/test-recreate-deployment,UID:8ee10d34-614f-11ea-9978-0242ac11000d,ResourceVersion:8639,Generation:2,CreationTimestamp:2020-03-08 15:14:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-03-08 15:14:53 +0000 UTC 2020-03-08 15:14:53 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-03-08 15:14:53 +0000 UTC 2020-03-08 15:14:51 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Mar 8 15:14:53.923: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-jg8pn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jg8pn/replicasets/test-recreate-deployment-589c4bfd,UID:902d84ed-614f-11ea-9978-0242ac11000d,ResourceVersion:8637,Generation:1,CreationTimestamp:2020-03-08 15:14:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8ee10d34-614f-11ea-9978-0242ac11000d 0xc0017c578f 0xc0017c57a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 8 15:14:53.923: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 8 15:14:53.923: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-jg8pn,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jg8pn/replicasets/test-recreate-deployment-5bf7f65dc,UID:8ee70f2c-614f-11ea-9978-0242ac11000d,ResourceVersion:8627,Generation:2,CreationTimestamp:2020-03-08 15:14:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 8ee10d34-614f-11ea-9978-0242ac11000d 0xc0017c59f0 0xc0017c59f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 8 15:14:53.945: INFO: Pod "test-recreate-deployment-589c4bfd-8wkbf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-8wkbf,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-jg8pn,SelfLink:/api/v1/namespaces/e2e-tests-deployment-jg8pn/pods/test-recreate-deployment-589c4bfd-8wkbf,UID:902f21d9-614f-11ea-9978-0242ac11000d,ResourceVersion:8638,Generation:0,CreationTimestamp:2020-03-08 15:14:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 902d84ed-614f-11ea-9978-0242ac11000d 0xc001e3a9af 0xc001e3a9c0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-w9f67 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w9f67,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-w9f67 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e3ae40} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e3ae60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:14:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:14:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:14:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:14:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-08 15:14:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:14:53.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-jg8pn" for this suite. 
Mar 8 15:14:59.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:15:00.042: INFO: namespace: e2e-tests-deployment-jg8pn, resource: bindings, ignored listing per whitelist
Mar 8 15:15:00.048: INFO: namespace e2e-tests-deployment-jg8pn deletion completed in 6.099504408s

• [SLOW TEST:8.553 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
RecreateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:15:00.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 8 15:15:04.234: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 8 15:15:04.257: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 8 15:15:06.257: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 8 15:15:06.262: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 8 15:15:08.257: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 8 15:15:08.261: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 8 15:15:10.257: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 8 15:15:10.261: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 8 15:15:12.257: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 8 15:15:12.261: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 8 15:15:14.257: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 8 15:15:14.261: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 8 15:15:16.257: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 8 15:15:16.261: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 8 15:15:18.257: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 8 15:15:18.261: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 8 15:15:20.257: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 8 15:15:20.262: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 8 15:15:22.257: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 8 15:15:22.260: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 8 15:15:24.257: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 8 15:15:24.261: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 8 15:15:26.257: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 8 15:15:26.261: INFO: Pod pod-with-poststart-exec-hook still exists
Mar 8 15:15:28.257: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Mar 8 15:15:28.260: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:15:28.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9vcm7" for this suite.
Mar 8 15:15:50.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:15:50.315: INFO: namespace: e2e-tests-container-lifecycle-hook-9vcm7, resource: bindings, ignored listing per whitelist
Mar 8 15:15:50.353: INFO: namespace e2e-tests-container-lifecycle-hook-9vcm7 deletion completed in 22.089743131s

• [SLOW TEST:50.305 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:15:50.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Mar 8 15:15:50.455: INFO: Waiting up to 5m0s for pod "pod-b1fa52a9-614f-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-rhz9g" to be "success or failure"
Mar 8 15:15:50.461: INFO: Pod "pod-b1fa52a9-614f-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.643209ms
Mar 8 15:15:52.466: INFO: Pod "pod-b1fa52a9-614f-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01096148s
STEP: Saw pod success
Mar 8 15:15:52.466: INFO: Pod "pod-b1fa52a9-614f-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:15:52.469: INFO: Trying to get logs from node hunter-worker pod pod-b1fa52a9-614f-11ea-b38e-0242ac11000f container test-container:
STEP: delete the pod
Mar 8 15:15:52.487: INFO: Waiting for pod pod-b1fa52a9-614f-11ea-b38e-0242ac11000f to disappear
Mar 8 15:15:52.506: INFO: Pod pod-b1fa52a9-614f-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:15:52.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rhz9g" for this suite.
Mar 8 15:15:58.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:15:58.719: INFO: namespace: e2e-tests-emptydir-rhz9g, resource: bindings, ignored listing per whitelist
Mar 8 15:15:58.721: INFO: namespace e2e-tests-emptydir-rhz9g deletion completed in 6.211611412s

• [SLOW TEST:8.368 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:15:58.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 8 15:15:58.835: INFO: Waiting up to 5m0s for pod "pod-b6f74bd3-614f-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-rv99f" to be "success or failure"
Mar 8 15:15:58.839: INFO: Pod "pod-b6f74bd3-614f-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.938556ms
Mar 8 15:16:00.844: INFO: Pod "pod-b6f74bd3-614f-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008200959s
STEP: Saw pod success
Mar 8 15:16:00.844: INFO: Pod "pod-b6f74bd3-614f-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:16:00.847: INFO: Trying to get logs from node hunter-worker2 pod pod-b6f74bd3-614f-11ea-b38e-0242ac11000f container test-container:
STEP: delete the pod
Mar 8 15:16:00.865: INFO: Waiting for pod pod-b6f74bd3-614f-11ea-b38e-0242ac11000f to disappear
Mar 8 15:16:00.869: INFO: Pod pod-b6f74bd3-614f-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:16:00.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rv99f" for this suite.
Mar 8 15:16:06.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:16:06.926: INFO: namespace: e2e-tests-emptydir-rv99f, resource: bindings, ignored listing per whitelist
Mar 8 15:16:06.981: INFO: namespace e2e-tests-emptydir-rv99f deletion completed in 6.108366729s

• [SLOW TEST:8.260 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0666,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:16:06.981: INFO: >>> kubeConfig: /root/.kube/config
STEP:
Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 8 15:16:07.079: INFO: PodSpec: initContainers in spec.initContainers Mar 8 15:16:57.354: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-bbe3b7c2-614f-11ea-b38e-0242ac11000f", GenerateName:"", Namespace:"e2e-tests-init-container-mfhws", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-mfhws/pods/pod-init-bbe3b7c2-614f-11ea-b38e-0242ac11000f", UID:"bbe4349c-614f-11ea-9978-0242ac11000d", ResourceVersion:"9021", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719277367, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"79529166"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-457vb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00200e240), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-457vb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), 
Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-457vb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-457vb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), 
Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00215c258), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0013b2c00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00215c2e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00215c300)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00215c308), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00215c30c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277367, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277367, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277367, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277367, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.11", PodIP:"10.244.2.58", StartTime:(*v1.Time)(0xc002364d80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001add9d0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001adda40)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://cbac429e1a0685869a37fa9ca38cb3f7ab8f31ebf7fa395a1eaa0241fad268b1"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002364dc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002364da0), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:16:57.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-mfhws" for this suite.
Mar 8 15:17:19.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:17:19.441: INFO: namespace: e2e-tests-init-container-mfhws, resource: bindings, ignored listing per whitelist
Mar 8 15:17:19.458: INFO: namespace e2e-tests-init-container-mfhws deletion completed in 22.080022068s

• [SLOW TEST:72.477 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:17:19.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a
default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-ddb48
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 8 15:17:19.600: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 8 15:17:35.777: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.59 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-ddb48 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 15:17:35.777: INFO: >>> kubeConfig: /root/.kube/config
I0308 15:17:35.815673 6 log.go:172] (0xc0000eb6b0) (0xc001384d20) Create stream
I0308 15:17:35.815719 6 log.go:172] (0xc0000eb6b0) (0xc001384d20) Stream added, broadcasting: 1
I0308 15:17:35.818022 6 log.go:172] (0xc0000eb6b0) Reply frame received for 1
I0308 15:17:35.818062 6 log.go:172] (0xc0000eb6b0) (0xc0011bdcc0) Create stream
I0308 15:17:35.818076 6 log.go:172] (0xc0000eb6b0) (0xc0011bdcc0) Stream added, broadcasting: 3
I0308 15:17:35.819299 6 log.go:172] (0xc0000eb6b0) Reply frame received for 3
I0308 15:17:35.819333 6 log.go:172] (0xc0000eb6b0) (0xc001384dc0) Create stream
I0308 15:17:35.819345 6 log.go:172] (0xc0000eb6b0) (0xc001384dc0) Stream added, broadcasting: 5
I0308 15:17:35.820368 6 log.go:172] (0xc0000eb6b0) Reply frame received for 5
I0308 15:17:36.890787 6 log.go:172] (0xc0000eb6b0) Data frame received for 3
I0308 15:17:36.890825 6 log.go:172] (0xc0011bdcc0) (3) Data frame handling
I0308 15:17:36.890844 6 log.go:172] (0xc0011bdcc0) (3) Data frame sent
I0308 15:17:36.890900 6 log.go:172] (0xc0000eb6b0) Data frame received for 3
I0308 15:17:36.890915 6 log.go:172] (0xc0011bdcc0) (3) Data frame handling
I0308 15:17:36.890940 6 log.go:172] (0xc0000eb6b0) Data frame received for 5
I0308 15:17:36.890957 6 log.go:172] (0xc001384dc0) (5) Data frame handling
I0308 15:17:36.892740 6 log.go:172] (0xc0000eb6b0) Data frame received for 1
I0308 15:17:36.892759 6 log.go:172] (0xc001384d20) (1) Data frame handling
I0308 15:17:36.892775 6 log.go:172] (0xc001384d20) (1) Data frame sent
I0308 15:17:36.892789 6 log.go:172] (0xc0000eb6b0) (0xc001384d20) Stream removed, broadcasting: 1
I0308 15:17:36.892820 6 log.go:172] (0xc0000eb6b0) Go away received
I0308 15:17:36.892883 6 log.go:172] (0xc0000eb6b0) (0xc001384d20) Stream removed, broadcasting: 1
I0308 15:17:36.892902 6 log.go:172] (0xc0000eb6b0) (0xc0011bdcc0) Stream removed, broadcasting: 3
I0308 15:17:36.892912 6 log.go:172] (0xc0000eb6b0) (0xc001384dc0) Stream removed, broadcasting: 5
Mar 8 15:17:36.892: INFO: Found all expected endpoints: [netserver-0]
Mar 8 15:17:36.896: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.64 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-ddb48 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 15:17:36.896: INFO: >>> kubeConfig: /root/.kube/config
I0308 15:17:36.928988 6 log.go:172] (0xc0020ee2c0) (0xc0017ab180) Create stream
I0308 15:17:36.929018 6 log.go:172] (0xc0020ee2c0) (0xc0017ab180) Stream added, broadcasting: 1
I0308 15:17:36.931173 6 log.go:172] (0xc0020ee2c0) Reply frame received for 1
I0308 15:17:36.931211 6 log.go:172] (0xc0020ee2c0) (0xc0010560a0) Create stream
I0308 15:17:36.931226 6 log.go:172] (0xc0020ee2c0) (0xc0010560a0) Stream added, broadcasting: 3
I0308 15:17:36.932136 6 log.go:172] (0xc0020ee2c0) Reply frame received for 3
I0308 15:17:36.932171 6 log.go:172] (0xc0020ee2c0) (0xc001056140) Create stream
I0308 15:17:36.932182 6 log.go:172] (0xc0020ee2c0) (0xc001056140) Stream added, broadcasting: 5
I0308 15:17:36.933070 6 log.go:172] (0xc0020ee2c0) Reply frame received for 5
I0308 15:17:37.990749 6 log.go:172] (0xc0020ee2c0) Data frame received for 3
I0308 15:17:37.990793 6 log.go:172] (0xc0010560a0) (3) Data frame handling
I0308 15:17:37.990809 6 log.go:172] (0xc0010560a0) (3) Data frame sent
I0308 15:17:37.990824 6 log.go:172] (0xc0020ee2c0) Data frame received for 3
I0308 15:17:37.990837 6 log.go:172] (0xc0010560a0) (3) Data frame handling
I0308 15:17:37.990851 6 log.go:172] (0xc0020ee2c0) Data frame received for 5
I0308 15:17:37.990860 6 log.go:172] (0xc001056140) (5) Data frame handling
I0308 15:17:37.992039 6 log.go:172] (0xc0020ee2c0) Data frame received for 1
I0308 15:17:37.992061 6 log.go:172] (0xc0017ab180) (1) Data frame handling
I0308 15:17:37.992081 6 log.go:172] (0xc0017ab180) (1) Data frame sent
I0308 15:17:37.992098 6 log.go:172] (0xc0020ee2c0) (0xc0017ab180) Stream removed, broadcasting: 1
I0308 15:17:37.992137 6 log.go:172] (0xc0020ee2c0) Go away received
I0308 15:17:37.992214 6 log.go:172] (0xc0020ee2c0) (0xc0017ab180) Stream removed, broadcasting: 1
I0308 15:17:37.992240 6 log.go:172] (0xc0020ee2c0) (0xc0010560a0) Stream removed, broadcasting: 3
I0308 15:17:37.992253 6 log.go:172] (0xc0020ee2c0) (0xc001056140) Stream removed, broadcasting: 5
Mar 8 15:17:37.992: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:17:37.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-ddb48" for this suite.
Mar 8 15:18:00.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:18:00.098: INFO: namespace: e2e-tests-pod-network-test-ddb48, resource: bindings, ignored listing per whitelist
Mar 8 15:18:00.115: INFO: namespace e2e-tests-pod-network-test-ddb48 deletion completed in 22.120259061s
• [SLOW TEST:40.658 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:18:00.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Mar 8 15:18:00.519: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Mar 8 15:18:00.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-49xg9'
Mar 8 15:18:00.879: INFO: stderr: ""
Mar 8 15:18:00.879: INFO: stdout: "service/redis-slave created\n"
Mar 8 15:18:00.879: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Mar 8 15:18:00.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-49xg9'
Mar 8 15:18:01.166: INFO: stderr: ""
Mar 8 15:18:01.167: INFO: stdout: "service/redis-master created\n"
Mar 8 15:18:01.167: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Mar 8 15:18:01.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-49xg9'
Mar 8 15:18:01.469: INFO: stderr: ""
Mar 8 15:18:01.469: INFO: stdout: "service/frontend created\n"
Mar 8 15:18:01.470: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Mar 8 15:18:01.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-49xg9'
Mar 8 15:18:01.752: INFO: stderr: ""
Mar 8 15:18:01.752: INFO: stdout: "deployment.extensions/frontend created\n"
Mar 8 15:18:01.752: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Mar 8 15:18:01.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-49xg9'
Mar 8 15:18:02.095: INFO: stderr: ""
Mar 8 15:18:02.095: INFO: stdout: "deployment.extensions/redis-master created\n"
Mar 8 15:18:02.095: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Mar 8 15:18:02.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-49xg9'
Mar 8 15:18:02.397: INFO: stderr: ""
Mar 8 15:18:02.397: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Mar 8 15:18:02.397: INFO: Waiting for all frontend pods to be Running.
Mar 8 15:18:07.447: INFO: Waiting for frontend to serve content.
Mar 8 15:18:07.461: INFO: Trying to add a new entry to the guestbook.
Mar 8 15:18:07.471: INFO: Verifying that added entry can be retrieved.
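The "Waiting for frontend to serve content" step boils down to polling the frontend service over HTTP until it responds, within a deadline. A minimal local sketch of that polling loop, using a loopback HTTP server as a stand-in for the guestbook frontend (the `guestbook.php` path and JSON body mimic the guestbook's API but are assumptions here):

```python
import http.server
import threading
import time
import urllib.request

# Stand-in for the guestbook frontend: a loopback HTTP server.
class FakeFrontend(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"data": ""}'
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), FakeFrontend)
threading.Thread(target=srv.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/guestbook.php?cmd=get&key=messages" % srv.server_address[1]

def wait_for_content(url, timeout=10.0, interval=0.5):
    """Poll until the frontend answers, mirroring 'Waiting for frontend to serve content.'"""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.read()
        except OSError:
            time.sleep(interval)
    raise TimeoutError("frontend never served content")

content = wait_for_content(url)
```

Retrying on `OSError` (which covers `urllib.error.URLError` and socket timeouts) rather than failing on the first refused connection is what lets the check tolerate pods that are still starting.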
STEP: using delete to clean up resources
Mar 8 15:18:07.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-49xg9'
Mar 8 15:18:07.647: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 8 15:18:07.647: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Mar 8 15:18:07.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-49xg9'
Mar 8 15:18:07.831: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 8 15:18:07.831: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 8 15:18:07.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-49xg9'
Mar 8 15:18:07.946: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 8 15:18:07.946: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 8 15:18:07.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-49xg9'
Mar 8 15:18:08.019: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 8 15:18:08.019: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 8 15:18:08.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-49xg9'
Mar 8 15:18:08.102: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 8 15:18:08.102: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 8 15:18:08.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-49xg9'
Mar 8 15:18:08.229: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 8 15:18:08.229: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:18:08.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-49xg9" for this suite.
Mar 8 15:18:48.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:18:48.447: INFO: namespace: e2e-tests-kubectl-49xg9, resource: bindings, ignored listing per whitelist
Mar 8 15:18:48.521: INFO: namespace e2e-tests-kubectl-49xg9 deletion completed in 40.243797149s
• [SLOW TEST:48.405 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:18:48.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 8 15:18:48.712: INFO: Pod name rollover-pod: Found 0 pods out of 1
Mar 8 15:18:53.716: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 8 15:18:53.716: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Mar 8 15:18:55.720: INFO: Creating deployment
"test-rollover-deployment" Mar 8 15:18:55.763: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 8 15:18:57.770: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 8 15:18:57.776: INFO: Ensure that both replica sets have 1 created replica Mar 8 15:18:57.782: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 8 15:18:57.790: INFO: Updating deployment test-rollover-deployment Mar 8 15:18:57.791: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 8 15:18:59.798: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 8 15:18:59.804: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 8 15:18:59.809: INFO: all replica sets need to contain the pod-template-hash label Mar 8 15:18:59.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 15:19:01.816: INFO: all replica sets need to contain the pod-template-hash label Mar 8 15:19:01.816: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 15:19:03.816: INFO: all replica sets need to contain the pod-template-hash label Mar 8 15:19:03.816: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 15:19:05.816: INFO: all replica 
sets need to contain the pod-template-hash label Mar 8 15:19:05.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 15:19:07.817: INFO: all replica sets need to contain the pod-template-hash label Mar 8 15:19:07.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277539, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719277535, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 8 15:19:09.881: INFO: Mar 8 15:19:09.881: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 8 15:19:09.894: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-t8rcv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-t8rcv/deployments/test-rollover-deployment,UID:20687c36-6150-11ea-9978-0242ac11000d,ResourceVersion:9622,Generation:2,CreationTimestamp:2020-03-08 15:18:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-08 15:18:55 +0000 UTC 2020-03-08 15:18:55 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-08 15:19:09 +0000 UTC 2020-03-08 15:18:55 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 8 15:19:10.214: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-t8rcv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-t8rcv/replicasets/test-rollover-deployment-5b8479fdb6,UID:21a47fe7-6150-11ea-9978-0242ac11000d,ResourceVersion:9613,Generation:2,CreationTimestamp:2020-03-08 15:18:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 20687c36-6150-11ea-9978-0242ac11000d 0xc00272f177 0xc00272f178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 8 15:19:10.214: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 8 15:19:10.214: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-t8rcv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-t8rcv/replicasets/test-rollover-controller,UID:1c307903-6150-11ea-9978-0242ac11000d,ResourceVersion:9621,Generation:2,CreationTimestamp:2020-03-08 15:18:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 20687c36-6150-11ea-9978-0242ac11000d 0xc00272efef 0xc00272f000}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 8 15:19:10.214: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-t8rcv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-t8rcv/replicasets/test-rollover-deployment-58494b7559,UID:2070c822-6150-11ea-9978-0242ac11000d,ResourceVersion:9580,Generation:2,CreationTimestamp:2020-03-08 15:18:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 20687c36-6150-11ea-9978-0242ac11000d 0xc00272f0b7 0xc00272f0b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 8 15:19:10.218: INFO: Pod "test-rollover-deployment-5b8479fdb6-4r868" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-4r868,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-t8rcv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-t8rcv/pods/test-rollover-deployment-5b8479fdb6-4r868,UID:21affb44-6150-11ea-9978-0242ac11000d,ResourceVersion:9591,Generation:0,CreationTimestamp:2020-03-08 15:18:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 21a47fe7-6150-11ea-9978-0242ac11000d 0xc0012e8d57 0xc0012e8d58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8qglx {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8qglx,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-8qglx true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0012e8dd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0012e8df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:18:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:18:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:18:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:18:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.69,StartTime:2020-03-08 15:18:57 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-08 15:18:59 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 
containerd://291a9980a9830438a56b32be5660c6219dc2990533d8c6a4360f1239cf4b7822}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:19:10.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-t8rcv" for this suite.
Mar 8 15:19:16.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:19:16.308: INFO: namespace: e2e-tests-deployment-t8rcv, resource: bindings, ignored listing per whitelist
Mar 8 15:19:16.345: INFO: namespace e2e-tests-deployment-t8rcv deletion completed in 6.122879628s
• [SLOW TEST:27.825 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] [sig-node] Events
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:19:16.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod Mar 8 15:19:22.477: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-2cbdf438-6150-11ea-b38e-0242ac11000f,GenerateName:,Namespace:e2e-tests-events-6pl5w,SelfLink:/api/v1/namespaces/e2e-tests-events-6pl5w/pods/send-events-2cbdf438-6150-11ea-b38e-0242ac11000f,UID:2cc5704e-6150-11ea-9978-0242ac11000d,ResourceVersion:9694,Generation:0,CreationTimestamp:2020-03-08 15:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 414581329,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-cbvk9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cbvk9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-cbvk9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001846440} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001846460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:19:16 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:19:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:19:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:19:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:10.244.2.65,StartTime:2020-03-08 15:19:16 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-03-08 15:19:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://1d002dc436af7521f049fd7645bd9c91c743d55b717699333c485f2eddee9a1a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Mar 8 15:19:24.481: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 8 15:19:26.485: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:19:26.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-6pl5w" for this suite. 
Mar 8 15:20:04.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:20:04.528: INFO: namespace: e2e-tests-events-6pl5w, resource: bindings, ignored listing per whitelist Mar 8 15:20:04.556: INFO: namespace e2e-tests-events-6pl5w deletion completed in 38.060382559s • [SLOW TEST:48.211 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:20:04.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-497d4681-6150-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 8 15:20:04.673: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-497de3d7-6150-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-n5chd" to be "success or failure" Mar 8 15:20:04.681: INFO: Pod 
"pod-projected-configmaps-497de3d7-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.863796ms Mar 8 15:20:06.684: INFO: Pod "pod-projected-configmaps-497de3d7-6150-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010702477s STEP: Saw pod success Mar 8 15:20:06.684: INFO: Pod "pod-projected-configmaps-497de3d7-6150-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:20:06.686: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-497de3d7-6150-11ea-b38e-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Mar 8 15:20:06.699: INFO: Waiting for pod pod-projected-configmaps-497de3d7-6150-11ea-b38e-0242ac11000f to disappear Mar 8 15:20:06.739: INFO: Pod pod-projected-configmaps-497de3d7-6150-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:20:06.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-n5chd" for this suite. 
Mar 8 15:20:12.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:20:12.803: INFO: namespace: e2e-tests-projected-n5chd, resource: bindings, ignored listing per whitelist Mar 8 15:20:12.812: INFO: namespace e2e-tests-projected-n5chd deletion completed in 6.071470849s • [SLOW TEST:8.256 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:20:12.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-4e6f0f5e-6150-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume secrets Mar 8 15:20:12.972: INFO: Waiting up to 5m0s for pod "pod-secrets-4e70b6fd-6150-11ea-b38e-0242ac11000f" in namespace "e2e-tests-secrets-fjtd4" to be "success or failure" Mar 8 15:20:12.981: INFO: Pod "pod-secrets-4e70b6fd-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.87398ms Mar 8 15:20:14.984: INFO: Pod "pod-secrets-4e70b6fd-6150-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012322358s STEP: Saw pod success Mar 8 15:20:14.984: INFO: Pod "pod-secrets-4e70b6fd-6150-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:20:14.987: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-4e70b6fd-6150-11ea-b38e-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 8 15:20:15.006: INFO: Waiting for pod pod-secrets-4e70b6fd-6150-11ea-b38e-0242ac11000f to disappear Mar 8 15:20:15.011: INFO: Pod pod-secrets-4e70b6fd-6150-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:20:15.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-fjtd4" for this suite. Mar 8 15:20:21.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:20:21.101: INFO: namespace: e2e-tests-secrets-fjtd4, resource: bindings, ignored listing per whitelist Mar 8 15:20:21.145: INFO: namespace e2e-tests-secrets-fjtd4 deletion completed in 6.130931726s • [SLOW TEST:8.332 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 
STEP: Creating a kubernetes client Mar 8 15:20:21.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 8 15:20:21.234: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.196837ms) Mar 8 15:20:21.236: INFO: (1) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.159048ms) Mar 8 15:20:21.238: INFO: (2) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.951189ms) Mar 8 15:20:21.241: INFO: (3) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.341169ms) Mar 8 15:20:21.243: INFO: (4) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.352131ms) Mar 8 15:20:21.245: INFO: (5) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.239562ms) Mar 8 15:20:21.248: INFO: (6) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.219863ms) Mar 8 15:20:21.250: INFO: (7) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.052419ms) Mar 8 15:20:21.252: INFO: (8) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.992613ms) Mar 8 15:20:21.254: INFO: (9) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.998309ms) Mar 8 15:20:21.256: INFO: (10) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.92846ms) Mar 8 15:20:21.258: INFO: (11) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.993666ms) Mar 8 15:20:21.260: INFO: (12) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.359844ms) Mar 8 15:20:21.262: INFO: (13) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.960055ms) Mar 8 15:20:21.264: INFO: (14) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.799922ms) Mar 8 15:20:21.266: INFO: (15) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.039317ms) Mar 8 15:20:21.268: INFO: (16) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.862336ms) Mar 8 15:20:21.270: INFO: (17) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 1.95116ms) Mar 8 15:20:21.272: INFO: (18) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.5507ms) Mar 8 15:20:21.291: INFO: (19) /api/v1/nodes/hunter-worker:10250/proxy/logs/:
containers/
pods/
(200; 18.758495ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:20:21.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-proxy-j54sm" for this suite. Mar 8 15:20:27.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:20:27.329: INFO: namespace: e2e-tests-proxy-j54sm, resource: bindings, ignored listing per whitelist Mar 8 15:20:27.357: INFO: namespace e2e-tests-proxy-j54sm deletion completed in 6.063427086s • [SLOW TEST:6.212 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:20:27.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-5710ec24-6150-11ea-b38e-0242ac11000f STEP: Creating a pod to 
test consume configMaps Mar 8 15:20:27.450: INFO: Waiting up to 5m0s for pod "pod-configmaps-5711a70d-6150-11ea-b38e-0242ac11000f" in namespace "e2e-tests-configmap-7wwvj" to be "success or failure" Mar 8 15:20:27.452: INFO: Pod "pod-configmaps-5711a70d-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.983722ms Mar 8 15:20:29.455: INFO: Pod "pod-configmaps-5711a70d-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005076753s Mar 8 15:20:31.457: INFO: Pod "pod-configmaps-5711a70d-6150-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007168255s STEP: Saw pod success Mar 8 15:20:31.457: INFO: Pod "pod-configmaps-5711a70d-6150-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:20:31.458: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-5711a70d-6150-11ea-b38e-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 8 15:20:31.487: INFO: Waiting for pod pod-configmaps-5711a70d-6150-11ea-b38e-0242ac11000f to disappear Mar 8 15:20:31.499: INFO: Pod pod-configmaps-5711a70d-6150-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:20:31.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-7wwvj" for this suite. 
Mar 8 15:20:37.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:20:37.583: INFO: namespace: e2e-tests-configmap-7wwvj, resource: bindings, ignored listing per whitelist Mar 8 15:20:37.619: INFO: namespace e2e-tests-configmap-7wwvj deletion completed in 6.118419085s • [SLOW TEST:10.262 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:20:37.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 8 15:20:37.714: INFO: Waiting up to 5m0s for pod "downward-api-5d32832e-6150-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-rptjn" to be "success or failure" Mar 8 15:20:37.724: INFO: Pod "downward-api-5d32832e-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.289722ms Mar 8 15:20:39.728: INFO: Pod "downward-api-5d32832e-6150-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.013538687s STEP: Saw pod success Mar 8 15:20:39.728: INFO: Pod "downward-api-5d32832e-6150-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:20:39.731: INFO: Trying to get logs from node hunter-worker2 pod downward-api-5d32832e-6150-11ea-b38e-0242ac11000f container dapi-container: STEP: delete the pod Mar 8 15:20:39.749: INFO: Waiting for pod downward-api-5d32832e-6150-11ea-b38e-0242ac11000f to disappear Mar 8 15:20:39.766: INFO: Pod downward-api-5d32832e-6150-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:20:39.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-rptjn" for this suite. Mar 8 15:20:45.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:20:45.851: INFO: namespace: e2e-tests-downward-api-rptjn, resource: bindings, ignored listing per whitelist Mar 8 15:20:45.852: INFO: namespace e2e-tests-downward-api-rptjn deletion completed in 6.083110182s • [SLOW TEST:8.233 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:20:45.852: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Mar 8 15:20:45.941: INFO: Waiting up to 5m0s for pod "client-containers-621a3a36-6150-11ea-b38e-0242ac11000f" in namespace "e2e-tests-containers-fsmtg" to be "success or failure" Mar 8 15:20:45.956: INFO: Pod "client-containers-621a3a36-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.824092ms Mar 8 15:20:47.959: INFO: Pod "client-containers-621a3a36-6150-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018093249s STEP: Saw pod success Mar 8 15:20:47.959: INFO: Pod "client-containers-621a3a36-6150-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:20:47.961: INFO: Trying to get logs from node hunter-worker pod client-containers-621a3a36-6150-11ea-b38e-0242ac11000f container test-container: STEP: delete the pod Mar 8 15:20:47.982: INFO: Waiting for pod client-containers-621a3a36-6150-11ea-b38e-0242ac11000f to disappear Mar 8 15:20:48.013: INFO: Pod client-containers-621a3a36-6150-11ea-b38e-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:20:48.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-fsmtg" for this suite. 
Mar 8 15:20:54.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:20:54.146: INFO: namespace: e2e-tests-containers-fsmtg, resource: bindings, ignored listing per whitelist Mar 8 15:20:54.161: INFO: namespace e2e-tests-containers-fsmtg deletion completed in 6.144700946s • [SLOW TEST:8.308 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:20:54.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-4cnn4/configmap-test-67164115-6150-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 8 15:20:54.328: INFO: Waiting up to 5m0s for pod "pod-configmaps-671764df-6150-11ea-b38e-0242ac11000f" in namespace "e2e-tests-configmap-4cnn4" to be "success or failure" Mar 8 15:20:54.381: INFO: Pod "pod-configmaps-671764df-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 53.196789ms Mar 8 15:20:56.385: INFO: Pod "pod-configmaps-671764df-6150-11ea-b38e-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 2.057063295s Mar 8 15:20:58.388: INFO: Pod "pod-configmaps-671764df-6150-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060876091s STEP: Saw pod success Mar 8 15:20:58.388: INFO: Pod "pod-configmaps-671764df-6150-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:20:58.391: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-671764df-6150-11ea-b38e-0242ac11000f container env-test: STEP: delete the pod Mar 8 15:20:58.413: INFO: Waiting for pod pod-configmaps-671764df-6150-11ea-b38e-0242ac11000f to disappear Mar 8 15:20:58.416: INFO: Pod pod-configmaps-671764df-6150-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:20:58.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-4cnn4" for this suite. 
Mar 8 15:21:04.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:21:05.396: INFO: namespace: e2e-tests-configmap-4cnn4, resource: bindings, ignored listing per whitelist Mar 8 15:21:05.419: INFO: namespace e2e-tests-configmap-4cnn4 deletion completed in 6.998748558s • [SLOW TEST:11.258 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:21:05.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 Mar 8 15:21:05.538: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 8 15:21:05.574: INFO: Waiting for terminating namespaces to be deleted... 
Mar 8 15:21:05.576: INFO: Logging pods the kubelet thinks is on node hunter-worker before test Mar 8 15:21:05.582: INFO: kube-proxy-h66sh from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded) Mar 8 15:21:05.582: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 15:21:05.582: INFO: kindnet-jjqmp from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded) Mar 8 15:21:05.582: INFO: Container kindnet-cni ready: true, restart count 0 Mar 8 15:21:05.582: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test Mar 8 15:21:05.585: INFO: kube-proxy-chv9d from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded) Mar 8 15:21:05.585: INFO: Container kube-proxy ready: true, restart count 0 Mar 8 15:21:05.585: INFO: kindnet-nwqfj from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded) Mar 8 15:21:05.585: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-6f08167b-6150-11ea-b38e-0242ac11000f 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-6f08167b-6150-11ea-b38e-0242ac11000f off the node hunter-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-6f08167b-6150-11ea-b38e-0242ac11000f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:21:09.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-sched-pred-pjqj5" for this suite. Mar 8 15:21:17.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:21:17.840: INFO: namespace: e2e-tests-sched-pred-pjqj5, resource: bindings, ignored listing per whitelist Mar 8 15:21:17.864: INFO: namespace e2e-tests-sched-pred-pjqj5 deletion completed in 8.105693413s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 • [SLOW TEST:12.445 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:21:17.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 8 15:21:18.018: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7535f1b9-6150-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-4spng" to be "success or failure" Mar 8 15:21:18.081: INFO: Pod "downwardapi-volume-7535f1b9-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 63.562994ms Mar 8 15:21:20.085: INFO: Pod "downwardapi-volume-7535f1b9-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067160511s Mar 8 15:21:22.088: INFO: Pod "downwardapi-volume-7535f1b9-6150-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070486903s STEP: Saw pod success Mar 8 15:21:22.088: INFO: Pod "downwardapi-volume-7535f1b9-6150-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:21:22.090: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-7535f1b9-6150-11ea-b38e-0242ac11000f container client-container: STEP: delete the pod Mar 8 15:21:22.159: INFO: Waiting for pod downwardapi-volume-7535f1b9-6150-11ea-b38e-0242ac11000f to disappear Mar 8 15:21:22.165: INFO: Pod downwardapi-volume-7535f1b9-6150-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:21:22.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-4spng" for this suite. 
Mar 8 15:21:28.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:21:28.275: INFO: namespace: e2e-tests-downward-api-4spng, resource: bindings, ignored listing per whitelist
Mar 8 15:21:28.304: INFO: namespace e2e-tests-downward-api-4spng deletion completed in 6.136189045s
• [SLOW TEST:10.439 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Watchers
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:21:28.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Mar 8 15:21:28.493: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-znmss,SelfLink:/api/v1/namespaces/e2e-tests-watch-znmss/configmaps/e2e-watch-test-watch-closed,UID:7b6f12c7-6150-11ea-9978-0242ac11000d,ResourceVersion:10191,Generation:0,CreationTimestamp:2020-03-08 15:21:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Mar 8 15:21:28.493: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-znmss,SelfLink:/api/v1/namespaces/e2e-tests-watch-znmss/configmaps/e2e-watch-test-watch-closed,UID:7b6f12c7-6150-11ea-9978-0242ac11000d,ResourceVersion:10192,Generation:0,CreationTimestamp:2020-03-08 15:21:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Mar 8 15:21:28.544: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-znmss,SelfLink:/api/v1/namespaces/e2e-tests-watch-znmss/configmaps/e2e-watch-test-watch-closed,UID:7b6f12c7-6150-11ea-9978-0242ac11000d,ResourceVersion:10193,Generation:0,CreationTimestamp:2020-03-08 15:21:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 8 15:21:28.544: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-znmss,SelfLink:/api/v1/namespaces/e2e-tests-watch-znmss/configmaps/e2e-watch-test-watch-closed,UID:7b6f12c7-6150-11ea-9978-0242ac11000d,ResourceVersion:10194,Generation:0,CreationTimestamp:2020-03-08 15:21:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:21:28.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-znmss" for this suite.
Mar 8 15:21:34.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:21:34.608: INFO: namespace: e2e-tests-watch-znmss, resource: bindings, ignored listing per whitelist
Mar 8 15:21:34.632: INFO: namespace e2e-tests-watch-znmss deletion completed in 6.077081989s
• [SLOW TEST:6.329 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:21:34.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-7f27d471-6150-11ea-b38e-0242ac11000f
Mar 8 15:21:34.712: INFO: Pod name my-hostname-basic-7f27d471-6150-11ea-b38e-0242ac11000f: Found 0 pods out of 1
Mar 8 15:21:39.717: INFO: Pod name my-hostname-basic-7f27d471-6150-11ea-b38e-0242ac11000f: Found 1 pods out of 1
Mar 8 15:21:39.717: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7f27d471-6150-11ea-b38e-0242ac11000f" are running
Mar 8 15:21:39.720: INFO: Pod "my-hostname-basic-7f27d471-6150-11ea-b38e-0242ac11000f-562lx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 15:21:34 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 15:21:36 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 15:21:36 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-08 15:21:34 +0000 UTC Reason: Message:}])
Mar 8 15:21:39.720: INFO: Trying to dial the pod
Mar 8 15:21:44.740: INFO: Controller my-hostname-basic-7f27d471-6150-11ea-b38e-0242ac11000f: Got expected result from replica 1 [my-hostname-basic-7f27d471-6150-11ea-b38e-0242ac11000f-562lx]: "my-hostname-basic-7f27d471-6150-11ea-b38e-0242ac11000f-562lx", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:21:44.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-7kz65" for this suite.
Mar 8 15:21:50.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:21:50.782: INFO: namespace: e2e-tests-replication-controller-7kz65, resource: bindings, ignored listing per whitelist
Mar 8 15:21:50.834: INFO: namespace e2e-tests-replication-controller-7kz65 deletion completed in 6.090800594s
• [SLOW TEST:16.202 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases
should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:21:50.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:21:53.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-qzpxz" for this suite.
Mar 8 15:22:43.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:22:43.224: INFO: namespace: e2e-tests-kubelet-test-qzpxz, resource: bindings, ignored listing per whitelist
Mar 8 15:22:43.264: INFO: namespace e2e-tests-kubelet-test-qzpxz deletion completed in 50.083176421s
• [SLOW TEST:52.430 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:22:43.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-2tpc
STEP: Creating a pod to test atomic-volume-subpath
Mar 8 15:22:43.493: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2tpc" in namespace "e2e-tests-subpath-4l96k" to be "success or failure"
Mar 8 15:22:43.509: INFO: Pod "pod-subpath-test-configmap-2tpc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.928653ms
Mar 8 15:22:45.512: INFO: Pod "pod-subpath-test-configmap-2tpc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019472148s
Mar 8 15:22:47.515: INFO: Pod "pod-subpath-test-configmap-2tpc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021958838s
Mar 8 15:22:49.518: INFO: Pod "pod-subpath-test-configmap-2tpc": Phase="Running", Reason="", readiness=false. Elapsed: 6.025582289s
Mar 8 15:22:51.522: INFO: Pod "pod-subpath-test-configmap-2tpc": Phase="Running", Reason="", readiness=false. Elapsed: 8.029328328s
Mar 8 15:22:53.526: INFO: Pod "pod-subpath-test-configmap-2tpc": Phase="Running", Reason="", readiness=false. Elapsed: 10.033645113s
Mar 8 15:22:55.531: INFO: Pod "pod-subpath-test-configmap-2tpc": Phase="Running", Reason="", readiness=false. Elapsed: 12.038199223s
Mar 8 15:22:57.535: INFO: Pod "pod-subpath-test-configmap-2tpc": Phase="Running", Reason="", readiness=false. Elapsed: 14.041942663s
Mar 8 15:22:59.539: INFO: Pod "pod-subpath-test-configmap-2tpc": Phase="Running", Reason="", readiness=false. Elapsed: 16.046310385s
Mar 8 15:23:01.543: INFO: Pod "pod-subpath-test-configmap-2tpc": Phase="Running", Reason="", readiness=false. Elapsed: 18.05074331s
Mar 8 15:23:03.548: INFO: Pod "pod-subpath-test-configmap-2tpc": Phase="Running", Reason="", readiness=false. Elapsed: 20.055048281s
Mar 8 15:23:05.552: INFO: Pod "pod-subpath-test-configmap-2tpc": Phase="Running", Reason="", readiness=false. Elapsed: 22.059341183s
Mar 8 15:23:07.555: INFO: Pod "pod-subpath-test-configmap-2tpc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.0627316s
STEP: Saw pod success
Mar 8 15:23:07.555: INFO: Pod "pod-subpath-test-configmap-2tpc" satisfied condition "success or failure"
Mar 8 15:23:07.558: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-2tpc container test-container-subpath-configmap-2tpc:
STEP: delete the pod
Mar 8 15:23:07.600: INFO: Waiting for pod pod-subpath-test-configmap-2tpc to disappear
Mar 8 15:23:07.612: INFO: Pod pod-subpath-test-configmap-2tpc no longer exists
STEP: Deleting pod pod-subpath-test-configmap-2tpc
Mar 8 15:23:07.612: INFO: Deleting pod "pod-subpath-test-configmap-2tpc" in namespace "e2e-tests-subpath-4l96k"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:23:07.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-4l96k" for this suite.
Mar 8 15:23:13.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:23:13.660: INFO: namespace: e2e-tests-subpath-4l96k, resource: bindings, ignored listing per whitelist
Mar 8 15:23:13.704: INFO: namespace e2e-tests-subpath-4l96k deletion completed in 6.087401698s
• [SLOW TEST:30.439 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Networking Granular Checks: Pods
should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:23:13.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7rlpn
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 8 15:23:13.815: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 8 15:23:31.937: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.70:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-7rlpn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 15:23:31.937: INFO: >>> kubeConfig: /root/.kube/config
I0308 15:23:31.976064 6 log.go:172] (0xc000ace4d0) (0xc000aed180) Create stream
I0308 15:23:31.976098 6 log.go:172] (0xc000ace4d0) (0xc000aed180) Stream added, broadcasting: 1
I0308 15:23:31.979825 6 log.go:172] (0xc000ace4d0) Reply frame received for 1
I0308 15:23:31.979884 6 log.go:172] (0xc000ace4d0) (0xc0015c1720) Create stream
I0308 15:23:31.979906 6 log.go:172] (0xc000ace4d0) (0xc0015c1720) Stream added, broadcasting: 3
I0308 15:23:31.984481 6 log.go:172] (0xc000ace4d0) Reply frame received for 3
I0308 15:23:31.984521 6 log.go:172] (0xc000ace4d0) (0xc0015c17c0) Create stream
I0308 15:23:31.984537 6 log.go:172] (0xc000ace4d0) (0xc0015c17c0) Stream added, broadcasting: 5
I0308 15:23:31.985668 6 log.go:172] (0xc000ace4d0) Reply frame received for 5
I0308 15:23:32.052228 6 log.go:172] (0xc000ace4d0) Data frame received for 3
I0308 15:23:32.052260 6 log.go:172] (0xc0015c1720) (3) Data frame handling
I0308 15:23:32.052271 6 log.go:172] (0xc0015c1720) (3) Data frame sent
I0308 15:23:32.052291 6 log.go:172] (0xc000ace4d0) Data frame received for 5
I0308 15:23:32.052302 6 log.go:172] (0xc0015c17c0) (5) Data frame handling
I0308 15:23:32.052583 6 log.go:172] (0xc000ace4d0) Data frame received for 3
I0308 15:23:32.052610 6 log.go:172] (0xc0015c1720) (3) Data frame handling
I0308 15:23:32.054076 6 log.go:172] (0xc000ace4d0) Data frame received for 1
I0308 15:23:32.054104 6 log.go:172] (0xc000aed180) (1) Data frame handling
I0308 15:23:32.054165 6 log.go:172] (0xc000aed180) (1) Data frame sent
I0308 15:23:32.054192 6 log.go:172] (0xc000ace4d0) (0xc000aed180) Stream removed, broadcasting: 1
I0308 15:23:32.054220 6 log.go:172] (0xc000ace4d0) Go away received
I0308 15:23:32.054317 6 log.go:172] (0xc000ace4d0) (0xc000aed180) Stream removed, broadcasting: 1
I0308 15:23:32.054337 6 log.go:172] (0xc000ace4d0) (0xc0015c1720) Stream removed, broadcasting: 3
I0308 15:23:32.054351 6 log.go:172] (0xc000ace4d0) (0xc0015c17c0) Stream removed, broadcasting: 5
Mar 8 15:23:32.054: INFO: Found all expected endpoints: [netserver-0]
Mar 8 15:23:32.057: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.79:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-7rlpn PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 15:23:32.057: INFO: >>> kubeConfig: /root/.kube/config
I0308 15:23:32.092236 6 log.go:172] (0xc0021102c0) (0xc00075d540) Create stream
I0308 15:23:32.092257 6 log.go:172] (0xc0021102c0) (0xc00075d540) Stream added, broadcasting: 1
I0308 15:23:32.094613 6 log.go:172] (0xc0021102c0) Reply frame received for 1
I0308 15:23:32.094649 6 log.go:172] (0xc0021102c0) (0xc00075d5e0) Create stream
I0308 15:23:32.094662 6 log.go:172] (0xc0021102c0) (0xc00075d5e0) Stream added, broadcasting: 3
I0308 15:23:32.095940 6 log.go:172] (0xc0021102c0) Reply frame received for 3
I0308 15:23:32.095970 6 log.go:172] (0xc0021102c0) (0xc000aed220) Create stream
I0308 15:23:32.095981 6 log.go:172] (0xc0021102c0) (0xc000aed220) Stream added, broadcasting: 5
I0308 15:23:32.096973 6 log.go:172] (0xc0021102c0) Reply frame received for 5
I0308 15:23:32.167395 6 log.go:172] (0xc0021102c0) Data frame received for 3
I0308 15:23:32.167421 6 log.go:172] (0xc00075d5e0) (3) Data frame handling
I0308 15:23:32.167442 6 log.go:172] (0xc00075d5e0) (3) Data frame sent
I0308 15:23:32.167455 6 log.go:172] (0xc0021102c0) Data frame received for 3
I0308 15:23:32.167467 6 log.go:172] (0xc00075d5e0) (3) Data frame handling
I0308 15:23:32.167547 6 log.go:172] (0xc0021102c0) Data frame received for 5
I0308 15:23:32.167571 6 log.go:172] (0xc000aed220) (5) Data frame handling
I0308 15:23:32.168965 6 log.go:172] (0xc0021102c0) Data frame received for 1
I0308 15:23:32.168986 6 log.go:172] (0xc00075d540) (1) Data frame handling
I0308 15:23:32.169001 6 log.go:172] (0xc00075d540) (1) Data frame sent
I0308 15:23:32.169016 6 log.go:172] (0xc0021102c0) (0xc00075d540) Stream removed, broadcasting: 1
I0308 15:23:32.169027 6 log.go:172] (0xc0021102c0) Go away received
I0308 15:23:32.169165 6 log.go:172] (0xc0021102c0) (0xc00075d540) Stream removed, broadcasting: 1
I0308 15:23:32.169184 6 log.go:172] (0xc0021102c0) (0xc00075d5e0) Stream removed, broadcasting: 3
I0308 15:23:32.169199 6 log.go:172] (0xc0021102c0) (0xc000aed220) Stream removed, broadcasting: 5
Mar 8 15:23:32.169: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:23:32.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-7rlpn" for this suite.
Mar 8 15:23:54.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:23:54.248: INFO: namespace: e2e-tests-pod-network-test-7rlpn, resource: bindings, ignored listing per whitelist
Mar 8 15:23:54.291: INFO: namespace e2e-tests-pod-network-test-7rlpn deletion completed in 22.118360571s
• [SLOW TEST:40.587 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] HostPath
should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:23:54.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Mar 8 15:23:54.405: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-lzhqk" to be "success or failure"
Mar 8 15:23:54.409: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117661ms
Mar 8 15:23:56.413: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00745545s
Mar 8 15:23:58.418: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012863901s
STEP: Saw pod success
Mar 8 15:23:58.418: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Mar 8 15:23:58.421: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Mar 8 15:23:58.511: INFO: Waiting for pod pod-host-path-test to disappear
Mar 8 15:23:58.516: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:23:58.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-lzhqk" for this suite.
Mar 8 15:24:04.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:24:04.623: INFO: namespace: e2e-tests-hostpath-lzhqk, resource: bindings, ignored listing per whitelist
Mar 8 15:24:04.630: INFO: namespace e2e-tests-hostpath-lzhqk deletion completed in 6.111648356s
• [SLOW TEST:10.339 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:24:04.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 8 15:24:04.732: INFO: Waiting up to 5m0s for pod "pod-d89676e4-6150-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-5knhx" to be "success or failure"
Mar 8 15:24:04.737: INFO: Pod "pod-d89676e4-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.910773ms
Mar 8 15:24:06.741: INFO: Pod "pod-d89676e4-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008427125s
Mar 8 15:24:08.745: INFO: Pod "pod-d89676e4-6150-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012327087s
STEP: Saw pod success
Mar 8 15:24:08.745: INFO: Pod "pod-d89676e4-6150-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:24:08.747: INFO: Trying to get logs from node hunter-worker pod pod-d89676e4-6150-11ea-b38e-0242ac11000f container test-container:
STEP: delete the pod
Mar 8 15:24:08.796: INFO: Waiting for pod pod-d89676e4-6150-11ea-b38e-0242ac11000f to disappear
Mar 8 15:24:08.814: INFO: Pod pod-d89676e4-6150-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:24:08.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5knhx" for this suite.
Mar 8 15:24:14.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:24:14.891: INFO: namespace: e2e-tests-emptydir-5knhx, resource: bindings, ignored listing per whitelist
Mar 8 15:24:14.902: INFO: namespace e2e-tests-emptydir-5knhx deletion completed in 6.084011432s
• [SLOW TEST:10.271 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0666,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Downward API volume
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:24:14.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 8 15:24:17.571: INFO: Successfully updated pod "labelsupdatedeb764d0-6150-11ea-b38e-0242ac11000f"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:24:19.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vz6ng" for this suite.
Mar 8 15:24:41.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:24:41.618: INFO: namespace: e2e-tests-downward-api-vz6ng, resource: bindings, ignored listing per whitelist
Mar 8 15:24:41.700: INFO: namespace e2e-tests-downward-api-vz6ng deletion completed in 22.108877472s
• [SLOW TEST:26.798 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:24:41.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 8 15:24:41.795: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eeae5e11-6150-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-t24lr" to be "success or failure"
Mar 8 15:24:41.813: INFO: Pod "downwardapi-volume-eeae5e11-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.143349ms
Mar 8 15:24:43.817: INFO: Pod "downwardapi-volume-eeae5e11-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022230097s
Mar 8 15:24:45.820: INFO: Pod "downwardapi-volume-eeae5e11-6150-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025581152s
STEP: Saw pod success
Mar 8 15:24:45.820: INFO: Pod "downwardapi-volume-eeae5e11-6150-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:24:45.823: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-eeae5e11-6150-11ea-b38e-0242ac11000f container client-container:
STEP: delete the pod
Mar 8 15:24:45.878: INFO: Waiting for pod downwardapi-volume-eeae5e11-6150-11ea-b38e-0242ac11000f to disappear
Mar 8 15:24:45.882: INFO: Pod downwardapi-volume-eeae5e11-6150-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:24:45.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-t24lr" for this suite.
Mar 8 15:24:51.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:24:51.977: INFO: namespace: e2e-tests-downward-api-t24lr, resource: bindings, ignored listing per whitelist
Mar 8 15:24:52.001: INFO: namespace e2e-tests-downward-api-t24lr deletion completed in 6.115407647s
• [SLOW TEST:10.301 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:24:52.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-95spg/secret-test-f4d3985d-6150-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 8 15:24:52.112: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4d448d6-6150-11ea-b38e-0242ac11000f" in namespace "e2e-tests-secrets-95spg" to be "success or failure"
Mar 8 15:24:52.187: INFO: Pod "pod-configmaps-f4d448d6-6150-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 74.205599ms
Mar 8 15:24:54.190: INFO: Pod "pod-configmaps-f4d448d6-6150-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.077653481s
STEP: Saw pod success
Mar 8 15:24:54.190: INFO: Pod "pod-configmaps-f4d448d6-6150-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:24:54.192: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-f4d448d6-6150-11ea-b38e-0242ac11000f container env-test:
STEP: delete the pod
Mar 8 15:24:54.221: INFO: Waiting for pod pod-configmaps-f4d448d6-6150-11ea-b38e-0242ac11000f to disappear
Mar 8 15:24:54.231: INFO: Pod pod-configmaps-f4d448d6-6150-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:24:54.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-95spg" for this suite.
Mar 8 15:25:00.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:25:00.287: INFO: namespace: e2e-tests-secrets-95spg, resource: bindings, ignored listing per whitelist
Mar 8 15:25:00.330: INFO: namespace e2e-tests-secrets-95spg deletion completed in 6.071860475s
• [SLOW TEST:8.329 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:25:00.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 8 15:25:00.421: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Mar 8 15:25:00.425: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-knpws/daemonsets","resourceVersion":"10904"},"items":null}
Mar 8 15:25:00.426: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-knpws/pods","resourceVersion":"10904"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:25:00.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-knpws" for this suite.
Mar 8 15:25:06.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:25:06.478: INFO: namespace: e2e-tests-daemonsets-knpws, resource: bindings, ignored listing per whitelist
Mar 8 15:25:06.519: INFO: namespace e2e-tests-daemonsets-knpws deletion completed in 6.085544914s
S [SKIPPING] [6.189 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should rollback without unnecessary restarts [Conformance] [It]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 8 15:25:00.421: Requires at least 2 nodes (not -1)
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:25:06.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:25:06.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-s5jtd" for this suite.
Mar 8 15:25:12.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:25:12.651: INFO: namespace: e2e-tests-services-s5jtd, resource: bindings, ignored listing per whitelist
Mar 8 15:25:12.703: INFO: namespace e2e-tests-services-s5jtd deletion completed in 6.098192541s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:6.183 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:25:12.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-brx2d
Mar 8 15:25:14.779: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-brx2d
STEP: checking the pod's current state and verifying that restartCount is present
Mar 8 15:25:14.781: INFO: Initial restart count of pod liveness-http is 0
Mar 8 15:25:34.822: INFO: Restart count of pod e2e-tests-container-probe-brx2d/liveness-http is now 1 (20.040498061s elapsed)
Mar 8 15:25:54.867: INFO: Restart count of pod e2e-tests-container-probe-brx2d/liveness-http is now 2 (40.085734657s elapsed)
Mar 8 15:26:14.920: INFO: Restart count of pod e2e-tests-container-probe-brx2d/liveness-http is now 3 (1m0.138154024s elapsed)
Mar 8 15:26:35.069: INFO: Restart count of pod e2e-tests-container-probe-brx2d/liveness-http is now 4 (1m20.287984704s elapsed)
Mar 8 15:27:47.254: INFO: Restart count of pod e2e-tests-container-probe-brx2d/liveness-http is now 5 (2m32.472218525s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:27:47.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-brx2d" for this suite.
Mar 8 15:27:53.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:27:53.342: INFO: namespace: e2e-tests-container-probe-brx2d, resource: bindings, ignored listing per whitelist
Mar 8 15:27:53.401: INFO: namespace e2e-tests-container-probe-brx2d deletion completed in 6.08811194s
• [SLOW TEST:160.698 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:27:53.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Mar 8 15:27:54.029: INFO: created pod pod-service-account-defaultsa
Mar 8 15:27:54.030: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Mar 8 15:27:54.037: INFO: created pod pod-service-account-mountsa
Mar 8 15:27:54.037: INFO: pod pod-service-account-mountsa service account token volume mount: true
Mar 8 15:27:54.044: INFO: created pod pod-service-account-nomountsa
Mar 8 15:27:54.044: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Mar 8 15:27:54.074: INFO: created pod pod-service-account-defaultsa-mountspec
Mar 8 15:27:54.074: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Mar 8 15:27:54.079: INFO: created pod pod-service-account-mountsa-mountspec
Mar 8 15:27:54.080: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Mar 8 15:27:54.100: INFO: created pod pod-service-account-nomountsa-mountspec
Mar 8 15:27:54.100: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Mar 8 15:27:54.136: INFO: created pod pod-service-account-defaultsa-nomountspec
Mar 8 15:27:54.136: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Mar 8 15:27:54.156: INFO: created pod pod-service-account-mountsa-nomountspec
Mar 8 15:27:54.156: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Mar 8 15:27:54.178: INFO: created pod pod-service-account-nomountsa-nomountspec
Mar 8 15:27:54.178: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:27:54.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-d9sqr" for this suite.
Mar 8 15:28:00.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:28:00.332: INFO: namespace: e2e-tests-svcaccounts-d9sqr, resource: bindings, ignored listing per whitelist
Mar 8 15:28:00.333: INFO: namespace e2e-tests-svcaccounts-d9sqr deletion completed in 6.124368395s
• [SLOW TEST:6.932 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
should allow opting out of API token automount [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:28:00.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-65121089-6151-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 8 15:28:00.441: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6512ccf5-6151-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-s7kpb" to be "success or failure"
Mar 8 15:28:00.449: INFO: Pod "pod-projected-configmaps-6512ccf5-6151-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.80682ms
Mar 8 15:28:02.453: INFO: Pod "pod-projected-configmaps-6512ccf5-6151-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011995761s
STEP: Saw pod success
Mar 8 15:28:02.454: INFO: Pod "pod-projected-configmaps-6512ccf5-6151-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:28:02.456: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-6512ccf5-6151-11ea-b38e-0242ac11000f container projected-configmap-volume-test:
STEP: delete the pod
Mar 8 15:28:02.487: INFO: Waiting for pod pod-projected-configmaps-6512ccf5-6151-11ea-b38e-0242ac11000f to disappear
Mar 8 15:28:02.491: INFO: Pod pod-projected-configmaps-6512ccf5-6151-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:28:02.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-s7kpb" for this suite.
Mar 8 15:28:08.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:28:08.595: INFO: namespace: e2e-tests-projected-s7kpb, resource: bindings, ignored listing per whitelist
Mar 8 15:28:08.616: INFO: namespace e2e-tests-projected-s7kpb deletion completed in 6.121843579s
• [SLOW TEST:8.283 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:28:08.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 8 15:28:08.768: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:28:08.770: INFO: Number of nodes with available pods: 0
Mar 8 15:28:08.770: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:28:09.788: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:28:09.791: INFO: Number of nodes with available pods: 0
Mar 8 15:28:09.791: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:28:10.775: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:28:10.778: INFO: Number of nodes with available pods: 2
Mar 8 15:28:10.778: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Mar 8 15:28:10.792: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:28:10.795: INFO: Number of nodes with available pods: 1
Mar 8 15:28:10.795: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:28:11.799: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:28:11.803: INFO: Number of nodes with available pods: 1
Mar 8 15:28:11.803: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:28:12.800: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:28:12.804: INFO: Number of nodes with available pods: 1
Mar 8 15:28:12.804: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:28:13.831: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:28:13.834: INFO: Number of nodes with available pods: 1
Mar 8 15:28:13.835: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:28:14.800: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:28:14.804: INFO: Number of nodes with available pods: 1
Mar 8 15:28:14.804: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:28:15.799: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:28:15.802: INFO: Number of nodes with available pods: 1
Mar 8 15:28:15.802: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:28:16.800: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:28:16.803: INFO: Number of nodes with available pods: 2
Mar 8 15:28:16.803: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-wdjrh, will wait for the garbage collector to delete the pods
Mar 8 15:28:16.870: INFO: Deleting DaemonSet.extensions daemon-set took: 5.476199ms
Mar 8 15:28:16.970: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.247877ms
Mar 8 15:28:20.373: INFO: Number of nodes with available pods: 0
Mar 8 15:28:20.373: INFO: Number of running nodes: 0, number of available pods: 0
Mar 8 15:28:20.375: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-wdjrh/daemonsets","resourceVersion":"11525"},"items":null}
Mar 8 15:28:20.377: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-wdjrh/pods","resourceVersion":"11525"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:28:20.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-wdjrh" for this suite.
Mar 8 15:28:26.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:28:26.467: INFO: namespace: e2e-tests-daemonsets-wdjrh, resource: bindings, ignored listing per whitelist
Mar 8 15:28:26.469: INFO: namespace e2e-tests-daemonsets-wdjrh deletion completed in 6.083363286s
• [SLOW TEST:17.852 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:28:26.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Mar 8 15:28:26.750: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:28:31.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-g9hfd" for this suite.
Mar 8 15:28:53.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:28:53.439: INFO: namespace: e2e-tests-init-container-g9hfd, resource: bindings, ignored listing per whitelist
Mar 8 15:28:53.530: INFO: namespace e2e-tests-init-container-g9hfd deletion completed in 22.127495269s
• [SLOW TEST:27.061 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:28:53.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Mar 8 15:28:53.689: INFO: Waiting up to 5m0s for pod "pod-84d2109a-6151-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-9c2w6" to be "success or failure"
Mar 8 15:28:53.715: INFO: Pod "pod-84d2109a-6151-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.937104ms
Mar 8 15:28:57.089: INFO: Pod "pod-84d2109a-6151-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3.400667678s
STEP: Saw pod success
Mar 8 15:28:57.089: INFO: Pod "pod-84d2109a-6151-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:28:57.275: INFO: Trying to get logs from node hunter-worker2 pod pod-84d2109a-6151-11ea-b38e-0242ac11000f container test-container:
STEP: delete the pod
Mar 8 15:28:57.345: INFO: Waiting for pod pod-84d2109a-6151-11ea-b38e-0242ac11000f to disappear
Mar 8 15:28:57.363: INFO: Pod pod-84d2109a-6151-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:28:57.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9c2w6" for this suite.
Mar 8 15:29:03.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:29:03.401: INFO: namespace: e2e-tests-emptydir-9c2w6, resource: bindings, ignored listing per whitelist
Mar 8 15:29:03.446: INFO: namespace e2e-tests-emptydir-9c2w6 deletion completed in 6.079949956s
• [SLOW TEST:9.916 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0644,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:29:03.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-jdzfd
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jdzfd to expose endpoints map[]
Mar 8 15:29:03.591: INFO: Get endpoints failed (15.907354ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Mar 8 15:29:04.594: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jdzfd exposes endpoints map[] (1.019605883s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-jdzfd
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jdzfd to expose endpoints map[pod1:[80]]
Mar 8 15:29:06.685: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jdzfd exposes endpoints map[pod1:[80]] (2.08405837s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-jdzfd
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jdzfd to expose endpoints map[pod1:[80] pod2:[80]]
Mar 8 15:29:08.743: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jdzfd exposes endpoints map[pod1:[80] pod2:[80]] (2.054171302s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-jdzfd
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jdzfd to expose endpoints map[pod2:[80]]
Mar 8 15:29:09.781: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jdzfd exposes endpoints map[pod2:[80]] (1.033535917s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-jdzfd
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-jdzfd to expose endpoints map[]
Mar 8 15:29:10.797: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-jdzfd exposes endpoints map[] (1.00931736s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:29:10.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-jdzfd" for this suite.
Mar 8 15:29:16.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:29:16.906: INFO: namespace: e2e-tests-services-jdzfd, resource: bindings, ignored listing per whitelist
Mar 8 15:29:16.911: INFO: namespace e2e-tests-services-jdzfd deletion completed in 6.087242133s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
• [SLOW TEST:13.465 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:29:16.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 8 15:29:17.059: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92bf1cd7-6151-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-sjvtf" to be "success or failure"
Mar 8 15:29:17.082: INFO: Pod "downwardapi-volume-92bf1cd7-6151-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.927201ms
Mar 8 15:29:19.085: INFO: Pod "downwardapi-volume-92bf1cd7-6151-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026695739s
STEP: Saw pod success
Mar 8 15:29:19.086: INFO: Pod "downwardapi-volume-92bf1cd7-6151-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:29:19.106: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-92bf1cd7-6151-11ea-b38e-0242ac11000f container client-container:
STEP: delete the pod
Mar 8 15:29:19.137: INFO: Waiting for pod downwardapi-volume-92bf1cd7-6151-11ea-b38e-0242ac11000f to disappear
Mar 8 15:29:19.146: INFO: Pod downwardapi-volume-92bf1cd7-6151-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:29:19.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sjvtf" for this suite.
Mar 8 15:29:25.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:29:25.200: INFO: namespace: e2e-tests-downward-api-sjvtf, resource: bindings, ignored listing per whitelist Mar 8 15:29:25.244: INFO: namespace e2e-tests-downward-api-sjvtf deletion completed in 6.093926001s • [SLOW TEST:8.332 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:29:25.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0308 15:29:31.377295 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 8 15:29:31.377: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:29:31.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-xvx6x" for this suite. 
Mar 8 15:29:37.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:29:37.445: INFO: namespace: e2e-tests-gc-xvx6x, resource: bindings, ignored listing per whitelist Mar 8 15:29:37.478: INFO: namespace e2e-tests-gc-xvx6x deletion completed in 6.098581353s • [SLOW TEST:12.234 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:29:37.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 8 15:29:37.589: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9efbe239-6151-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-xnjll" to be "success or failure" Mar 8 15:29:37.591: INFO: Pod "downwardapi-volume-9efbe239-6151-11ea-b38e-0242ac11000f": 
Phase="Pending", Reason="", readiness=false. Elapsed: 2.137701ms Mar 8 15:29:39.595: INFO: Pod "downwardapi-volume-9efbe239-6151-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006312931s Mar 8 15:29:41.611: INFO: Pod "downwardapi-volume-9efbe239-6151-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022217285s STEP: Saw pod success Mar 8 15:29:41.611: INFO: Pod "downwardapi-volume-9efbe239-6151-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:29:41.614: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-9efbe239-6151-11ea-b38e-0242ac11000f container client-container: STEP: delete the pod Mar 8 15:29:41.634: INFO: Waiting for pod downwardapi-volume-9efbe239-6151-11ea-b38e-0242ac11000f to disappear Mar 8 15:29:41.639: INFO: Pod downwardapi-volume-9efbe239-6151-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:29:41.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-xnjll" for this suite. 
Mar 8 15:29:47.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:29:47.745: INFO: namespace: e2e-tests-projected-xnjll, resource: bindings, ignored listing per whitelist Mar 8 15:29:47.759: INFO: namespace e2e-tests-projected-xnjll deletion completed in 6.117285515s • [SLOW TEST:10.281 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:29:47.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Mar 8 15:29:47.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kn927' Mar 8 15:29:51.327: INFO: stderr: "" Mar 8 15:29:51.327: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 
STEP: Waiting for Redis master to start. Mar 8 15:29:52.330: INFO: Selector matched 1 pods for map[app:redis] Mar 8 15:29:52.330: INFO: Found 0 / 1 Mar 8 15:29:53.334: INFO: Selector matched 1 pods for map[app:redis] Mar 8 15:29:53.334: INFO: Found 0 / 1 Mar 8 15:29:54.331: INFO: Selector matched 1 pods for map[app:redis] Mar 8 15:29:54.331: INFO: Found 1 / 1 Mar 8 15:29:54.331: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 8 15:29:54.333: INFO: Selector matched 1 pods for map[app:redis] Mar 8 15:29:54.333: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Mar 8 15:29:54.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6snsn redis-master --namespace=e2e-tests-kubectl-kn927' Mar 8 15:29:54.427: INFO: stderr: "" Mar 8 15:29:54.427: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 Mar 15:29:52.986 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Mar 15:29:52.986 # Server started, Redis version 3.2.12\n1:M 08 Mar 15:29:52.986 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 08 Mar 15:29:52.986 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Mar 8 15:29:54.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6snsn redis-master --namespace=e2e-tests-kubectl-kn927 --tail=1' Mar 8 15:29:54.524: INFO: stderr: "" Mar 8 15:29:54.524: INFO: stdout: "1:M 08 Mar 15:29:52.986 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Mar 8 15:29:54.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6snsn redis-master --namespace=e2e-tests-kubectl-kn927 --limit-bytes=1' Mar 8 15:29:54.611: INFO: stderr: "" Mar 8 15:29:54.611: INFO: stdout: " " STEP: exposing timestamps Mar 8 15:29:54.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6snsn redis-master --namespace=e2e-tests-kubectl-kn927 --tail=1 --timestamps' Mar 8 15:29:54.710: INFO: stderr: "" Mar 8 15:29:54.710: INFO: stdout: "2020-03-08T15:29:52.986809252Z 1:M 08 Mar 15:29:52.986 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Mar 8 15:29:57.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6snsn redis-master --namespace=e2e-tests-kubectl-kn927 --since=1s' Mar 8 15:29:57.346: INFO: stderr: "" Mar 8 15:29:57.346: INFO: stdout: "" Mar 8 15:29:57.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-6snsn redis-master --namespace=e2e-tests-kubectl-kn927 --since=24h' Mar 8 15:29:57.441: INFO: stderr: "" Mar 8 15:29:57.441: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 Mar 15:29:52.986 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Mar 15:29:52.986 # Server started, Redis version 3.2.12\n1:M 08 Mar 15:29:52.986 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 08 Mar 15:29:52.986 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Mar 8 15:29:57.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kn927' Mar 8 15:29:57.520: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 8 15:29:57.520: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Mar 8 15:29:57.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-kn927' Mar 8 15:29:57.604: INFO: stderr: "No resources found.\n" Mar 8 15:29:57.604: INFO: stdout: "" Mar 8 15:29:57.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-kn927 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 15:29:57.685: INFO: stderr: "" Mar 8 15:29:57.685: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:29:57.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kn927" for this suite. 
Mar 8 15:30:19.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:30:19.742: INFO: namespace: e2e-tests-kubectl-kn927, resource: bindings, ignored listing per whitelist Mar 8 15:30:19.795: INFO: namespace e2e-tests-kubectl-kn927 deletion completed in 22.107045196s • [SLOW TEST:32.036 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:30:19.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 8 15:30:19.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run 
e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-mf2hk' Mar 8 15:30:19.971: INFO: stderr: "" Mar 8 15:30:19.971: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 Mar 8 15:30:19.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-mf2hk' Mar 8 15:30:27.892: INFO: stderr: "" Mar 8 15:30:27.892: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:30:27.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-mf2hk" for this suite. Mar 8 15:30:33.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:30:34.088: INFO: namespace: e2e-tests-kubectl-mf2hk, resource: bindings, ignored listing per whitelist Mar 8 15:30:34.100: INFO: namespace e2e-tests-kubectl-mf2hk deletion completed in 6.16622825s • [SLOW TEST:14.305 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:30:34.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 8 15:30:34.205: INFO: Waiting up to 5m0s for pod "pod-c0b6f65d-6151-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-kkhtv" to be "success or failure" Mar 8 15:30:34.207: INFO: Pod "pod-c0b6f65d-6151-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.443561ms Mar 8 15:30:36.211: INFO: Pod "pod-c0b6f65d-6151-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006415613s Mar 8 15:30:38.215: INFO: Pod "pod-c0b6f65d-6151-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010457598s STEP: Saw pod success Mar 8 15:30:38.215: INFO: Pod "pod-c0b6f65d-6151-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:30:38.218: INFO: Trying to get logs from node hunter-worker2 pod pod-c0b6f65d-6151-11ea-b38e-0242ac11000f container test-container: STEP: delete the pod Mar 8 15:30:38.245: INFO: Waiting for pod pod-c0b6f65d-6151-11ea-b38e-0242ac11000f to disappear Mar 8 15:30:38.249: INFO: Pod pod-c0b6f65d-6151-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:30:38.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-kkhtv" for this suite. Mar 8 15:30:44.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:30:44.294: INFO: namespace: e2e-tests-emptydir-kkhtv, resource: bindings, ignored listing per whitelist Mar 8 15:30:44.343: INFO: namespace e2e-tests-emptydir-kkhtv deletion completed in 6.090253121s • [SLOW TEST:10.242 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:30:44.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 8 15:30:44.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client' Mar 8 15:30:44.529: INFO: stderr: "" Mar 8 15:30:44.529: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" Mar 8 15:30:44.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9rjhq' Mar 8 15:30:44.826: INFO: stderr: "" Mar 8 15:30:44.826: INFO: stdout: "replicationcontroller/redis-master created\n" Mar 8 15:30:44.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-9rjhq' Mar 8 15:30:45.110: INFO: stderr: "" Mar 8 15:30:45.110: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Mar 8 15:30:46.132: INFO: Selector matched 1 pods for map[app:redis] Mar 8 15:30:46.132: INFO: Found 0 / 1 Mar 8 15:30:47.114: INFO: Selector matched 1 pods for map[app:redis] Mar 8 15:30:47.114: INFO: Found 1 / 1 Mar 8 15:30:47.114: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 8 15:30:47.117: INFO: Selector matched 1 pods for map[app:redis] Mar 8 15:30:47.117: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 8 15:30:47.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-4fzgp --namespace=e2e-tests-kubectl-9rjhq' Mar 8 15:30:47.263: INFO: stderr: "" Mar 8 15:30:47.263: INFO: stdout: "Name: redis-master-4fzgp\nNamespace: e2e-tests-kubectl-9rjhq\nPriority: 0\nPriorityClassName: \nNode: hunter-worker/172.17.0.12\nStart Time: Sun, 08 Mar 2020 15:30:44 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.103\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://41eb1deed9c38da3b22101a7def6f64347cfdd55018f79f3ef0fee99ed0fae2b\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 08 Mar 2020 15:30:46 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-587rx (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-587rx:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-587rx\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned e2e-tests-kubectl-9rjhq/redis-master-4fzgp to hunter-worker\n Normal Pulled 2s kubelet, hunter-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, hunter-worker Created container\n Normal Started 1s kubelet, hunter-worker Started container\n" Mar 8 15:30:47.263: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-9rjhq' Mar 8 15:30:47.357: INFO: stderr: "" Mar 8 15:30:47.358: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-9rjhq\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-4fzgp\n" Mar 8 15:30:47.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-9rjhq' Mar 8 15:30:47.438: INFO: stderr: "" Mar 8 15:30:47.438: INFO: stdout: "Name: redis-master\nNamespace: e2e-tests-kubectl-9rjhq\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.105.195.226\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.103:6379\nSession Affinity: None\nEvents: \n" Mar 8 15:30:47.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane' Mar 8 15:30:47.543: INFO: stderr: "" Mar 8 15:30:47.543: INFO: stdout: "Name: hunter-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/hostname=hunter-control-plane\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:42:14 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type 
Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 08 Mar 2020 15:30:41 +0000 Sun, 08 Mar 2020 14:42:09 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 08 Mar 2020 15:30:41 +0000 Sun, 08 Mar 2020 14:42:09 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 08 Mar 2020 15:30:41 +0000 Sun, 08 Mar 2020 14:42:09 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 08 Mar 2020 15:30:41 +0000 Sun, 08 Mar 2020 14:42:44 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.13\n Hostname: hunter-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: 2a4329de41344349b36017b3052d3f96\n System UUID: b0983dfc-866e-4257-9f60-ab0b470ce9b2\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.13.12\n Kube-Proxy Version: v1.13.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-54ff9cd656-4gmwj 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 48m\n kube-system coredns-54ff9cd656-jp8ll 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 48m\n kube-system etcd-hunter-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 47m\n kube-system kindnet-gd8fq 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 48m\n kube-system kube-apiserver-hunter-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 47m\n 
kube-system kube-controller-manager-hunter-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 47m\n kube-system kube-proxy-75z28 0 (0%) 0 (0%) 0 (0%) 0 (0%) 48m\n kube-system kube-scheduler-hunter-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 47m\n local-path-storage local-path-provisioner-77cfdd744c-mrm9p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 48m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 48m kubelet, hunter-control-plane Starting kubelet.\n Normal NodeHasSufficientMemory 48m (x8 over 48m) kubelet, hunter-control-plane Node hunter-control-plane status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 48m (x8 over 48m) kubelet, hunter-control-plane Node hunter-control-plane status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 48m (x7 over 48m) kubelet, hunter-control-plane Node hunter-control-plane status is now: NodeHasSufficientPID\n Normal NodeAllocatableEnforced 48m kubelet, hunter-control-plane Updated Node Allocatable limit across pods\n Warning readOnlySysFS 48m kube-proxy, hunter-control-plane DOCKER RESTART NEEDED (docker issue #24000): /sys is read-only: cannot modify conntrack limits, problems may arise later.\n Normal Starting 48m kube-proxy, hunter-control-plane Starting kube-proxy.\n" Mar 8 15:30:47.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-9rjhq' Mar 8 15:30:47.630: INFO: stderr: "" Mar 8 15:30:47.630: INFO: stdout: "Name: e2e-tests-kubectl-9rjhq\nLabels: e2e-framework=kubectl\n e2e-run=6cdfd12c-614b-11ea-b38e-0242ac11000f\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:30:47.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9rjhq" for this suite.
Mar 8 15:31:09.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:31:09.725: INFO: namespace: e2e-tests-kubectl-9rjhq, resource: bindings, ignored listing per whitelist
Mar 8 15:31:09.744: INFO: namespace e2e-tests-kubectl-9rjhq deletion completed in 22.111957928s
• [SLOW TEST:25.402 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:31:09.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 8 15:31:09.840: INFO: Waiting up to 5m0s for pod "pod-d5f91853-6151-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-5jhvw" to be "success or failure"
Mar 8 15:31:09.859: INFO: Pod "pod-d5f91853-6151-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.151458ms
Mar 8 15:31:11.887: INFO: Pod "pod-d5f91853-6151-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046791139s
Mar 8 15:31:13.898: INFO: Pod "pod-d5f91853-6151-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05860756s
STEP: Saw pod success
Mar 8 15:31:13.898: INFO: Pod "pod-d5f91853-6151-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:31:13.902: INFO: Trying to get logs from node hunter-worker2 pod pod-d5f91853-6151-11ea-b38e-0242ac11000f container test-container:
STEP: delete the pod
Mar 8 15:31:13.938: INFO: Waiting for pod pod-d5f91853-6151-11ea-b38e-0242ac11000f to disappear
Mar 8 15:31:13.945: INFO: Pod pod-d5f91853-6151-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:31:13.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5jhvw" for this suite.
Mar 8 15:31:19.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:31:20.025: INFO: namespace: e2e-tests-emptydir-5jhvw, resource: bindings, ignored listing per whitelist
Mar 8 15:31:20.059: INFO: namespace e2e-tests-emptydir-5jhvw deletion completed in 6.107030183s
• [SLOW TEST:10.314 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,default) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Downward API volume
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:31:20.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 8 15:31:20.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc2152e0-6151-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-l6n9d" to be "success or failure"
Mar 8 15:31:20.186: INFO: Pod "downwardapi-volume-dc2152e0-6151-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.610849ms
Mar 8 15:31:22.189: INFO: Pod "downwardapi-volume-dc2152e0-6151-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017900306s
STEP: Saw pod success
Mar 8 15:31:22.189: INFO: Pod "downwardapi-volume-dc2152e0-6151-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:31:22.192: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-dc2152e0-6151-11ea-b38e-0242ac11000f container client-container:
STEP: delete the pod
Mar 8 15:31:22.226: INFO: Waiting for pod downwardapi-volume-dc2152e0-6151-11ea-b38e-0242ac11000f to disappear
Mar 8 15:31:22.232: INFO: Pod downwardapi-volume-dc2152e0-6151-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:31:22.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-l6n9d" for this suite.
Mar 8 15:31:28.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:31:28.274: INFO: namespace: e2e-tests-downward-api-l6n9d, resource: bindings, ignored listing per whitelist
Mar 8 15:31:28.326: INFO: namespace e2e-tests-downward-api-l6n9d deletion completed in 6.090392551s
• [SLOW TEST:8.267 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial]
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:31:28.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 8 15:31:28.425: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar 8 15:31:28.458: INFO: Number of nodes with available pods: 0
Mar 8 15:31:28.458: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Mar 8 15:31:28.492: INFO: Number of nodes with available pods: 0
Mar 8 15:31:28.492: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:31:29.496: INFO: Number of nodes with available pods: 0
Mar 8 15:31:29.496: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:31:30.496: INFO: Number of nodes with available pods: 1
Mar 8 15:31:30.496: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Mar 8 15:31:30.526: INFO: Number of nodes with available pods: 1
Mar 8 15:31:30.526: INFO: Number of running nodes: 0, number of available pods: 1
Mar 8 15:31:31.530: INFO: Number of nodes with available pods: 0
Mar 8 15:31:31.530: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Mar 8 15:31:31.563: INFO: Number of nodes with available pods: 0
Mar 8 15:31:31.563: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:31:32.568: INFO: Number of nodes with available pods: 0
Mar 8 15:31:32.568: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:31:33.567: INFO: Number of nodes with available pods: 0
Mar 8 15:31:33.567: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:31:34.567: INFO: Number of nodes with available pods: 0
Mar 8 15:31:34.567: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:31:35.567: INFO: Number of nodes with available pods: 0
Mar 8 15:31:35.567: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:31:36.566: INFO: Number of nodes with available pods: 1
Mar 8 15:31:36.566: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-bh9ht, will wait for the garbage collector to delete the pods
Mar 8 15:31:36.661: INFO: Deleting DaemonSet.extensions daemon-set took: 38.553616ms
Mar 8 15:31:36.761: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.191306ms
Mar 8 15:31:39.964: INFO: Number of nodes with available pods: 0
Mar 8 15:31:39.964: INFO: Number of running nodes: 0, number of available pods: 0
Mar 8 15:31:39.966: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-bh9ht/daemonsets","resourceVersion":"12532"},"items":null}
Mar 8 15:31:39.968: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-bh9ht/pods","resourceVersion":"12532"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:31:40.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-bh9ht" for this suite.
Mar 8 15:31:46.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:31:46.072: INFO: namespace: e2e-tests-daemonsets-bh9ht, resource: bindings, ignored listing per whitelist
Mar 8 15:31:46.135: INFO: namespace e2e-tests-daemonsets-bh9ht deletion completed in 6.117988356s
• [SLOW TEST:17.809 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:31:46.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Mar 8 15:31:46.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod
--namespace=e2e-tests-kubectl-lgcpw' Mar 8 15:31:46.304: INFO: stderr: "" Mar 8 15:31:46.304: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Mar 8 15:31:51.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-lgcpw -o json' Mar 8 15:31:51.462: INFO: stderr: "" Mar 8 15:31:51.462: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-08T15:31:46Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-lgcpw\",\n \"resourceVersion\": \"12579\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-lgcpw/pods/e2e-test-nginx-pod\",\n \"uid\": \"ebb4a637-6151-11ea-9978-0242ac11000d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-rgdgc\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n 
\"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-rgdgc\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-rgdgc\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T15:31:46Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T15:31:47Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T15:31:47Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-08T15:31:46Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://7107877f60961131912d979aa149d9a0e9db45e51038b62e389fe064a002fc87\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-08T15:31:47Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.107\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-08T15:31:46Z\"\n }\n}\n" STEP: replace the image in the pod Mar 8 15:31:51.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-lgcpw' Mar 8 15:31:51.742: INFO: stderr: "" Mar 8 15:31:51.742: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Mar 8 15:31:51.785: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-lgcpw'
Mar 8 15:31:54.015: INFO: stderr: ""
Mar 8 15:31:54.015: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:31:54.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lgcpw" for this suite.
Mar 8 15:32:00.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:32:00.069: INFO: namespace: e2e-tests-kubectl-lgcpw, resource: bindings, ignored listing per whitelist
Mar 8 15:32:00.136: INFO: namespace e2e-tests-kubectl-lgcpw deletion completed in 6.117943992s
• [SLOW TEST:14.000 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:32:00.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 8 15:32:00.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4009f49-6151-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-2twpg" to be "success or failure"
Mar 8 15:32:00.245: INFO: Pod "downwardapi-volume-f4009f49-6151-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.709809ms
Mar 8 15:32:02.249: INFO: Pod "downwardapi-volume-f4009f49-6151-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025460754s
STEP: Saw pod success
Mar 8 15:32:02.249: INFO: Pod "downwardapi-volume-f4009f49-6151-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:32:02.252: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f4009f49-6151-11ea-b38e-0242ac11000f container client-container:
STEP: delete the pod
Mar 8 15:32:02.266: INFO: Waiting for pod downwardapi-volume-f4009f49-6151-11ea-b38e-0242ac11000f to disappear
Mar 8 15:32:02.271: INFO: Pod downwardapi-volume-f4009f49-6151-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:32:02.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-2twpg" for this suite.
Mar 8 15:32:08.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:32:08.352: INFO: namespace: e2e-tests-downward-api-2twpg, resource: bindings, ignored listing per whitelist
Mar 8 15:32:08.379: INFO: namespace e2e-tests-downward-api-2twpg deletion completed in 6.105824742s
• [SLOW TEST:8.243 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:32:08.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-qghln/configmap-test-f8f1a858-6151-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 8 15:32:08.518: INFO: Waiting up to 5m0s for pod "pod-configmaps-f8f280ac-6151-11ea-b38e-0242ac11000f" in namespace "e2e-tests-configmap-qghln" to be "success or failure"
Mar 8 15:32:08.522: INFO: Pod "pod-configmaps-f8f280ac-6151-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030854ms
Mar 8 15:32:10.527: INFO: Pod "pod-configmaps-f8f280ac-6151-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008201449s
STEP: Saw pod success
Mar 8 15:32:10.527: INFO: Pod "pod-configmaps-f8f280ac-6151-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:32:10.529: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-f8f280ac-6151-11ea-b38e-0242ac11000f container env-test:
STEP: delete the pod
Mar 8 15:32:10.548: INFO: Waiting for pod pod-configmaps-f8f280ac-6151-11ea-b38e-0242ac11000f to disappear
Mar 8 15:32:10.553: INFO: Pod pod-configmaps-f8f280ac-6151-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:32:10.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qghln" for this suite.
Mar 8 15:32:16.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:32:16.618: INFO: namespace: e2e-tests-configmap-qghln, resource: bindings, ignored listing per whitelist
Mar 8 15:32:16.642: INFO: namespace e2e-tests-configmap-qghln deletion completed in 6.085529897s
• [SLOW TEST:8.262 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:32:16.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 8 15:32:19.298: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fddc04d6-6151-11ea-b38e-0242ac11000f"
Mar 8 15:32:19.298: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fddc04d6-6151-11ea-b38e-0242ac11000f" in namespace "e2e-tests-pods-dsjww" to be "terminated due to deadline exceeded"
Mar 8 15:32:19.323: INFO: Pod "pod-update-activedeadlineseconds-fddc04d6-6151-11ea-b38e-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 25.792571ms
Mar 8 15:32:21.327: INFO: Pod "pod-update-activedeadlineseconds-fddc04d6-6151-11ea-b38e-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 2.029717415s
Mar 8 15:32:23.331: INFO: Pod "pod-update-activedeadlineseconds-fddc04d6-6151-11ea-b38e-0242ac11000f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.0331649s
Mar 8 15:32:23.331: INFO: Pod "pod-update-activedeadlineseconds-fddc04d6-6151-11ea-b38e-0242ac11000f" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:32:23.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dsjww" for this suite.
Mar 8 15:32:29.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:32:29.430: INFO: namespace: e2e-tests-pods-dsjww, resource: bindings, ignored listing per whitelist
Mar 8 15:32:29.440: INFO: namespace e2e-tests-pods-dsjww deletion completed in 6.106582627s
• [SLOW TEST:12.798 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:32:29.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-05795418-6152-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 8 15:32:29.544: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-057b496e-6152-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-wp62q" to be "success or failure"
Mar 8 15:32:29.551: INFO: Pod "pod-projected-configmaps-057b496e-6152-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.900247ms
Mar 8 15:32:31.564: INFO: Pod "pod-projected-configmaps-057b496e-6152-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020011419s
STEP: Saw pod success
Mar 8 15:32:31.564: INFO: Pod "pod-projected-configmaps-057b496e-6152-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:32:31.567: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-057b496e-6152-11ea-b38e-0242ac11000f container projected-configmap-volume-test:
STEP: delete the pod
Mar 8 15:32:31.598: INFO: Waiting for pod pod-projected-configmaps-057b496e-6152-11ea-b38e-0242ac11000f to disappear
Mar 8 15:32:31.607: INFO: Pod pod-projected-configmaps-057b496e-6152-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:32:31.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wp62q" for this suite.
Mar 8 15:32:37.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:32:37.652: INFO: namespace: e2e-tests-projected-wp62q, resource: bindings, ignored listing per whitelist
Mar 8 15:32:37.689: INFO: namespace e2e-tests-projected-wp62q deletion completed in 6.078824587s
• [SLOW TEST:8.249 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:32:37.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 8 15:32:37.775: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Mar 8 15:32:42.779: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 8 15:32:42.779: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 8 15:32:42.799: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-48m8j,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-48m8j/deployments/test-cleanup-deployment,UID:0d606fcf-6152-11ea-9978-0242ac11000d,ResourceVersion:12830,Generation:1,CreationTimestamp:2020-03-08 15:32:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Mar 8 15:32:42.805: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. 
Mar 8 15:32:42.805: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 8 15:32:42.805: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-48m8j,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-48m8j/replicasets/test-cleanup-controller,UID:0a60e66c-6152-11ea-9978-0242ac11000d,ResourceVersion:12831,Generation:1,CreationTimestamp:2020-03-08 15:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 0d606fcf-6152-11ea-9978-0242ac11000d 0xc001f0f807 0xc001f0f808}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 8 15:32:42.893: INFO: Pod "test-cleanup-controller-bbbl8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-bbbl8,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-48m8j,SelfLink:/api/v1/namespaces/e2e-tests-deployment-48m8j/pods/test-cleanup-controller-bbbl8,UID:0a6464b9-6152-11ea-9978-0242ac11000d,ResourceVersion:12821,Generation:0,CreationTimestamp:2020-03-08 15:32:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 0a60e66c-6152-11ea-9978-0242ac11000d 0xc001f0fe87 0xc001f0fe88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-5rn2r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5rn2r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5rn2r true 
/var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f0ff00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f0ff20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:32:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:32:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:32:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:32:37 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.11,PodIP:10.244.2.91,StartTime:2020-03-08 15:32:37 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-03-08 15:32:39 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1fac74f7b59beb18d1c7116253d6fe2d6eac5ef4ba384f52b9a60cea6310311a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:32:42.893: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-48m8j" for this suite. Mar 8 15:32:49.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:32:49.208: INFO: namespace: e2e-tests-deployment-48m8j, resource: bindings, ignored listing per whitelist Mar 8 15:32:49.214: INFO: namespace e2e-tests-deployment-48m8j deletion completed in 6.205997569s • [SLOW TEST:11.524 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:32:49.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name projected-secret-test-113f0159-6152-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume secrets Mar 8 15:32:49.319: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-11445a29-6152-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-btl8q" to be "success or failure" Mar 8 15:32:49.324: INFO: Pod "pod-projected-secrets-11445a29-6152-11ea-b38e-0242ac11000f": Phase="Pending", 
Reason="", readiness=false. Elapsed: 4.191324ms Mar 8 15:32:51.326: INFO: Pod "pod-projected-secrets-11445a29-6152-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006992774s STEP: Saw pod success Mar 8 15:32:51.326: INFO: Pod "pod-projected-secrets-11445a29-6152-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:32:51.328: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-11445a29-6152-11ea-b38e-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 8 15:32:51.345: INFO: Waiting for pod pod-projected-secrets-11445a29-6152-11ea-b38e-0242ac11000f to disappear Mar 8 15:32:51.355: INFO: Pod pod-projected-secrets-11445a29-6152-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:32:51.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-btl8q" for this suite. 
Mar 8 15:32:57.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:32:57.457: INFO: namespace: e2e-tests-projected-btl8q, resource: bindings, ignored listing per whitelist Mar 8 15:32:57.482: INFO: namespace e2e-tests-projected-btl8q deletion completed in 6.123632678s • [SLOW TEST:8.268 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:32:57.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating server pod server in namespace e2e-tests-prestop-qxfkg STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace e2e-tests-prestop-qxfkg STEP: Deleting pre-stop pod Mar 8 15:33:10.662: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:33:10.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-prestop-qxfkg" for this suite. Mar 8 15:33:48.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:33:48.752: INFO: namespace: e2e-tests-prestop-qxfkg, resource: bindings, ignored listing per whitelist Mar 8 15:33:48.782: INFO: namespace e2e-tests-prestop-qxfkg deletion completed in 38.110692679s • [SLOW TEST:51.300 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:33:48.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-34c76109-6152-11ea-b38e-0242ac11000f STEP: Creating configMap with name cm-test-opt-upd-34c76194-6152-11ea-b38e-0242ac11000f STEP: 
Creating the pod STEP: Deleting configmap cm-test-opt-del-34c76109-6152-11ea-b38e-0242ac11000f STEP: Updating configmap cm-test-opt-upd-34c76194-6152-11ea-b38e-0242ac11000f STEP: Creating configMap with name cm-test-opt-create-34c761ca-6152-11ea-b38e-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:35:23.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gkkjs" for this suite. Mar 8 15:35:45.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:35:45.519: INFO: namespace: e2e-tests-projected-gkkjs, resource: bindings, ignored listing per whitelist Mar 8 15:35:45.543: INFO: namespace e2e-tests-projected-gkkjs deletion completed in 22.108578324s • [SLOW TEST:116.760 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:35:45.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command 
and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override all Mar 8 15:35:45.671: INFO: Waiting up to 5m0s for pod "client-containers-7a5cf8ab-6152-11ea-b38e-0242ac11000f" in namespace "e2e-tests-containers-pp5wk" to be "success or failure" Mar 8 15:35:45.690: INFO: Pod "client-containers-7a5cf8ab-6152-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.484768ms Mar 8 15:35:47.704: INFO: Pod "client-containers-7a5cf8ab-6152-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.033573687s STEP: Saw pod success Mar 8 15:35:47.704: INFO: Pod "client-containers-7a5cf8ab-6152-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:35:47.707: INFO: Trying to get logs from node hunter-worker pod client-containers-7a5cf8ab-6152-11ea-b38e-0242ac11000f container test-container: STEP: delete the pod Mar 8 15:35:47.723: INFO: Waiting for pod client-containers-7a5cf8ab-6152-11ea-b38e-0242ac11000f to disappear Mar 8 15:35:47.738: INFO: Pod client-containers-7a5cf8ab-6152-11ea-b38e-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:35:47.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-pp5wk" for this suite. 
Mar 8 15:35:53.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:35:53.835: INFO: namespace: e2e-tests-containers-pp5wk, resource: bindings, ignored listing per whitelist Mar 8 15:35:53.835: INFO: namespace e2e-tests-containers-pp5wk deletion completed in 6.09397915s • [SLOW TEST:8.292 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:35:53.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0308 15:36:04.055458 6 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 15:36:04.055: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:36:04.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-99fpf" for this suite. 
Mar 8 15:36:12.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:36:12.129: INFO: namespace: e2e-tests-gc-99fpf, resource: bindings, ignored listing per whitelist Mar 8 15:36:12.153: INFO: namespace e2e-tests-gc-99fpf deletion completed in 8.095982771s • [SLOW TEST:18.318 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:36:12.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-8a3a34ae-6152-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume secrets Mar 8 15:36:12.260: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8a3aa53a-6152-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-jnd5j" to be "success or failure" Mar 8 15:36:12.287: INFO: Pod "pod-projected-secrets-8a3aa53a-6152-11ea-b38e-0242ac11000f": Phase="Pending", 
Reason="", readiness=false. Elapsed: 26.984053ms Mar 8 15:36:14.291: INFO: Pod "pod-projected-secrets-8a3aa53a-6152-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030909044s Mar 8 15:36:16.295: INFO: Pod "pod-projected-secrets-8a3aa53a-6152-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034538315s STEP: Saw pod success Mar 8 15:36:16.295: INFO: Pod "pod-projected-secrets-8a3aa53a-6152-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:36:16.297: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-8a3aa53a-6152-11ea-b38e-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Mar 8 15:36:16.333: INFO: Waiting for pod pod-projected-secrets-8a3aa53a-6152-11ea-b38e-0242ac11000f to disappear Mar 8 15:36:16.340: INFO: Pod pod-projected-secrets-8a3aa53a-6152-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:36:16.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-jnd5j" for this suite. 
Mar 8 15:36:22.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:36:22.393: INFO: namespace: e2e-tests-projected-jnd5j, resource: bindings, ignored listing per whitelist Mar 8 15:36:22.469: INFO: namespace e2e-tests-projected-jnd5j deletion completed in 6.126435843s • [SLOW TEST:10.316 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:36:22.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 8 15:36:22.575: INFO: Waiting up to 5m0s for pod "downward-api-9060ff55-6152-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-hdhxs" to be "success or failure" Mar 8 15:36:22.598: INFO: Pod "downward-api-9060ff55-6152-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.471646ms Mar 8 15:36:24.602: INFO: Pod "downward-api-9060ff55-6152-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02660682s STEP: Saw pod success Mar 8 15:36:24.602: INFO: Pod "downward-api-9060ff55-6152-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:36:24.605: INFO: Trying to get logs from node hunter-worker pod downward-api-9060ff55-6152-11ea-b38e-0242ac11000f container dapi-container: STEP: delete the pod Mar 8 15:36:24.648: INFO: Waiting for pod downward-api-9060ff55-6152-11ea-b38e-0242ac11000f to disappear Mar 8 15:36:24.657: INFO: Pod downward-api-9060ff55-6152-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:36:24.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hdhxs" for this suite. Mar 8 15:36:30.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:36:30.736: INFO: namespace: e2e-tests-downward-api-hdhxs, resource: bindings, ignored listing per whitelist Mar 8 15:36:30.773: INFO: namespace e2e-tests-downward-api-hdhxs deletion completed in 6.109065277s • [SLOW TEST:8.304 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] 
[sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:36:30.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-955dcad5-6152-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume secrets Mar 8 15:36:31.039: INFO: Waiting up to 5m0s for pod "pod-secrets-956c784b-6152-11ea-b38e-0242ac11000f" in namespace "e2e-tests-secrets-rq9bs" to be "success or failure" Mar 8 15:36:31.045: INFO: Pod "pod-secrets-956c784b-6152-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.995892ms Mar 8 15:36:33.048: INFO: Pod "pod-secrets-956c784b-6152-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008658959s Mar 8 15:36:35.052: INFO: Pod "pod-secrets-956c784b-6152-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012686338s STEP: Saw pod success Mar 8 15:36:35.052: INFO: Pod "pod-secrets-956c784b-6152-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:36:35.055: INFO: Trying to get logs from node hunter-worker pod pod-secrets-956c784b-6152-11ea-b38e-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 8 15:36:35.076: INFO: Waiting for pod pod-secrets-956c784b-6152-11ea-b38e-0242ac11000f to disappear Mar 8 15:36:35.080: INFO: Pod pod-secrets-956c784b-6152-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:36:35.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-rq9bs" for this suite. Mar 8 15:36:41.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:36:41.188: INFO: namespace: e2e-tests-secrets-rq9bs, resource: bindings, ignored listing per whitelist Mar 8 15:36:41.198: INFO: namespace e2e-tests-secrets-rq9bs deletion completed in 6.114516725s • [SLOW TEST:10.425 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:36:41.198: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-7nlp STEP: Creating a pod to test atomic-volume-subpath Mar 8 15:36:41.334: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-7nlp" in namespace "e2e-tests-subpath-8zrtq" to be "success or failure" Mar 8 15:36:41.362: INFO: Pod "pod-subpath-test-downwardapi-7nlp": Phase="Pending", Reason="", readiness=false. Elapsed: 27.746912ms Mar 8 15:36:43.365: INFO: Pod "pod-subpath-test-downwardapi-7nlp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031317245s Mar 8 15:36:45.370: INFO: Pod "pod-subpath-test-downwardapi-7nlp": Phase="Running", Reason="", readiness=false. Elapsed: 4.035855698s Mar 8 15:36:47.374: INFO: Pod "pod-subpath-test-downwardapi-7nlp": Phase="Running", Reason="", readiness=false. Elapsed: 6.039830846s Mar 8 15:36:49.377: INFO: Pod "pod-subpath-test-downwardapi-7nlp": Phase="Running", Reason="", readiness=false. Elapsed: 8.042908873s Mar 8 15:36:51.381: INFO: Pod "pod-subpath-test-downwardapi-7nlp": Phase="Running", Reason="", readiness=false. Elapsed: 10.046597235s Mar 8 15:36:53.385: INFO: Pod "pod-subpath-test-downwardapi-7nlp": Phase="Running", Reason="", readiness=false. Elapsed: 12.050675148s Mar 8 15:36:55.388: INFO: Pod "pod-subpath-test-downwardapi-7nlp": Phase="Running", Reason="", readiness=false. Elapsed: 14.054427545s Mar 8 15:36:57.391: INFO: Pod "pod-subpath-test-downwardapi-7nlp": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.057259381s Mar 8 15:36:59.394: INFO: Pod "pod-subpath-test-downwardapi-7nlp": Phase="Running", Reason="", readiness=false. Elapsed: 18.059794315s Mar 8 15:37:01.398: INFO: Pod "pod-subpath-test-downwardapi-7nlp": Phase="Running", Reason="", readiness=false. Elapsed: 20.064367254s Mar 8 15:37:03.402: INFO: Pod "pod-subpath-test-downwardapi-7nlp": Phase="Running", Reason="", readiness=false. Elapsed: 22.067608626s Mar 8 15:37:05.406: INFO: Pod "pod-subpath-test-downwardapi-7nlp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.071778779s STEP: Saw pod success Mar 8 15:37:05.406: INFO: Pod "pod-subpath-test-downwardapi-7nlp" satisfied condition "success or failure" Mar 8 15:37:05.408: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-7nlp container test-container-subpath-downwardapi-7nlp: STEP: delete the pod Mar 8 15:37:05.438: INFO: Waiting for pod pod-subpath-test-downwardapi-7nlp to disappear Mar 8 15:37:05.467: INFO: Pod pod-subpath-test-downwardapi-7nlp no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-7nlp Mar 8 15:37:05.467: INFO: Deleting pod "pod-subpath-test-downwardapi-7nlp" in namespace "e2e-tests-subpath-8zrtq" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:37:05.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-8zrtq" for this suite. 
Mar 8 15:37:11.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:37:11.486: INFO: namespace: e2e-tests-subpath-8zrtq, resource: bindings, ignored listing per whitelist Mar 8 15:37:11.669: INFO: namespace e2e-tests-subpath-8zrtq deletion completed in 6.197775809s • [SLOW TEST:30.471 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:37:11.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-pkfkm [It] should perform canary updates and phased rolling updates of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a new StatefulSet Mar 8 15:37:11.911: INFO: Found 0 stateful pods, waiting for 3 Mar 8 15:37:21.915: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:37:21.915: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:37:21.915: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Mar 8 15:37:21.942: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 8 15:37:32.087: INFO: Updating stateful set ss2 Mar 8 15:37:32.159: INFO: Waiting for Pod e2e-tests-statefulset-pkfkm/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Mar 8 15:37:42.237: INFO: Found 1 stateful pods, waiting for 3 Mar 8 15:37:53.038: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:37:53.038: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 8 15:37:53.038: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 8 15:37:53.087: INFO: Updating stateful set ss2 Mar 8 15:37:53.177: INFO: Waiting for Pod e2e-tests-statefulset-pkfkm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Mar 8 15:38:03.233: INFO: Updating stateful set ss2 Mar 8 15:38:03.242: INFO: Waiting for StatefulSet e2e-tests-statefulset-pkfkm/ss2 to complete update Mar 8 15:38:03.242: INFO: Waiting for Pod e2e-tests-statefulset-pkfkm/ss2-0 to have revision ss2-6c5cd755cd
update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 8 15:38:13.869: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pkfkm Mar 8 15:38:13.879: INFO: Scaling statefulset ss2 to 0 Mar 8 15:38:43.896: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:38:43.899: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:38:43.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-pkfkm" for this suite. Mar 8 15:38:51.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:38:51.976: INFO: namespace: e2e-tests-statefulset-pkfkm, resource: bindings, ignored listing per whitelist Mar 8 15:38:52.026: INFO: namespace e2e-tests-statefulset-pkfkm deletion completed in 8.107891927s • [SLOW TEST:100.356 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating 
a kubernetes client Mar 8 15:38:52.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Mar 8 15:38:52.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:38:52.450: INFO: stderr: "" Mar 8 15:38:52.450: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 15:38:52.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:38:52.570: INFO: stderr: "" Mar 8 15:38:52.570: INFO: stdout: "update-demo-nautilus-4lfdb update-demo-nautilus-p645v " Mar 8 15:38:52.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4lfdb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:38:52.644: INFO: stderr: "" Mar 8 15:38:52.645: INFO: stdout: "" Mar 8 15:38:52.645: INFO: update-demo-nautilus-4lfdb is created but not running Mar 8 15:38:57.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:38:57.734: INFO: stderr: "" Mar 8 15:38:57.734: INFO: stdout: "update-demo-nautilus-4lfdb update-demo-nautilus-p645v " Mar 8 15:38:57.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4lfdb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:38:57.829: INFO: stderr: "" Mar 8 15:38:57.829: INFO: stdout: "" Mar 8 15:38:57.829: INFO: update-demo-nautilus-4lfdb is created but not running Mar 8 15:39:02.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:02.907: INFO: stderr: "" Mar 8 15:39:02.907: INFO: stdout: "update-demo-nautilus-4lfdb update-demo-nautilus-p645v " Mar 8 15:39:02.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4lfdb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:03.027: INFO: stderr: "" Mar 8 15:39:03.027: INFO: stdout: "true" Mar 8 15:39:03.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4lfdb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:03.105: INFO: stderr: "" Mar 8 15:39:03.105: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:39:03.105: INFO: validating pod update-demo-nautilus-4lfdb Mar 8 15:39:03.108: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:39:03.108: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 15:39:03.108: INFO: update-demo-nautilus-4lfdb is verified up and running Mar 8 15:39:03.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p645v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:03.184: INFO: stderr: "" Mar 8 15:39:03.184: INFO: stdout: "true" Mar 8 15:39:03.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p645v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:03.258: INFO: stderr: "" Mar 8 15:39:03.259: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:39:03.259: INFO: validating pod update-demo-nautilus-p645v Mar 8 15:39:03.261: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:39:03.261: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 8 15:39:03.261: INFO: update-demo-nautilus-p645v is verified up and running STEP: scaling down the replication controller Mar 8 15:39:03.262: INFO: scanned /root for discovery docs: Mar 8 15:39:03.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:04.350: INFO: stderr: "" Mar 8 15:39:04.350: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 15:39:04.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:04.470: INFO: stderr: "" Mar 8 15:39:04.470: INFO: stdout: "update-demo-nautilus-4lfdb update-demo-nautilus-p645v " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 8 15:39:09.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:09.566: INFO: stderr: "" Mar 8 15:39:09.567: INFO: stdout: "update-demo-nautilus-4lfdb " Mar 8 15:39:09.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4lfdb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:09.648: INFO: stderr: "" Mar 8 15:39:09.648: INFO: stdout: "true" Mar 8 15:39:09.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4lfdb -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:09.725: INFO: stderr: "" Mar 8 15:39:09.725: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:39:09.725: INFO: validating pod update-demo-nautilus-4lfdb Mar 8 15:39:09.727: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:39:09.727: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 15:39:09.727: INFO: update-demo-nautilus-4lfdb is verified up and running STEP: scaling up the replication controller Mar 8 15:39:09.728: INFO: scanned /root for discovery docs: Mar 8 15:39:09.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:10.846: INFO: stderr: "" Mar 8 15:39:10.846: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 15:39:10.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:10.950: INFO: stderr: "" Mar 8 15:39:10.951: INFO: stdout: "update-demo-nautilus-4lfdb update-demo-nautilus-bp8n8 " Mar 8 15:39:10.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4lfdb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:11.045: INFO: stderr: "" Mar 8 15:39:11.045: INFO: stdout: "true" Mar 8 15:39:11.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4lfdb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:11.115: INFO: stderr: "" Mar 8 15:39:11.115: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:39:11.115: INFO: validating pod update-demo-nautilus-4lfdb Mar 8 15:39:11.117: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:39:11.117: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 15:39:11.117: INFO: update-demo-nautilus-4lfdb is verified up and running Mar 8 15:39:11.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bp8n8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:11.185: INFO: stderr: "" Mar 8 15:39:11.185: INFO: stdout: "" Mar 8 15:39:11.185: INFO: update-demo-nautilus-bp8n8 is created but not running Mar 8 15:39:16.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:16.278: INFO: stderr: "" Mar 8 15:39:16.278: INFO: stdout: "update-demo-nautilus-4lfdb update-demo-nautilus-bp8n8 " Mar 8 15:39:16.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4lfdb -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:16.343: INFO: stderr: "" Mar 8 15:39:16.344: INFO: stdout: "true" Mar 8 15:39:16.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4lfdb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:16.407: INFO: stderr: "" Mar 8 15:39:16.407: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:39:16.407: INFO: validating pod update-demo-nautilus-4lfdb Mar 8 15:39:16.410: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:39:16.410: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 15:39:16.410: INFO: update-demo-nautilus-4lfdb is verified up and running Mar 8 15:39:16.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bp8n8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:16.475: INFO: stderr: "" Mar 8 15:39:16.475: INFO: stdout: "true" Mar 8 15:39:16.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bp8n8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:16.540: INFO: stderr: "" Mar 8 15:39:16.540: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:39:16.540: INFO: validating pod update-demo-nautilus-bp8n8 Mar 8 15:39:16.543: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:39:16.543: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 15:39:16.543: INFO: update-demo-nautilus-bp8n8 is verified up and running STEP: using delete to clean up resources Mar 8 15:39:16.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:16.618: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 15:39:16.618: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 8 15:39:16.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-dn92k' Mar 8 15:39:16.695: INFO: stderr: "No resources found.\n" Mar 8 15:39:16.695: INFO: stdout: "" Mar 8 15:39:16.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-dn92k -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 15:39:16.766: INFO: stderr: "" Mar 8 15:39:16.766: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:39:16.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-dn92k" for this suite. 
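The Update Demo test above drives everything through kubectl: it pipes a replication controller manifest to `kubectl create -f -`, scales it with `kubectl scale rc update-demo-nautilus --replicas=N --timeout=5m`, and re-queries pods with go-templates until the observed pod names and container running states match the target. The actual manifest is piped on stdin and never appears in the log; a sketch of what such a replication controller could look like (only the name, label, and image are taken from the log; everything else is an assumption):

```yaml
# Illustrative sketch; the test's real manifest is piped to `kubectl create -f -`
# and is not recorded in the log output.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                 # `kubectl scale rc ... --replicas=1` drops this to 1
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo     # label the test's `-l name=update-demo` queries match
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```

Scaling down deletes one pod (the log shows update-demo-nautilus-p645v disappear); scaling back up creates a fresh replacement (update-demo-nautilus-bp8n8), which the test then re-validates.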
Mar 8 15:39:22.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:39:22.806: INFO: namespace: e2e-tests-kubectl-dn92k, resource: bindings, ignored listing per whitelist Mar 8 15:39:22.851: INFO: namespace e2e-tests-kubectl-dn92k deletion completed in 6.082812071s • [SLOW TEST:30.825 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:39:22.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 8 15:39:22.919: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 8 15:39:22.928: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 8 15:39:27.933: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 8 15:39:27.933: INFO: 
Creating deployment "test-rolling-update-deployment" Mar 8 15:39:27.938: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 8 15:39:27.945: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 8 15:39:29.977: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 8 15:39:29.979: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Mar 8 15:39:29.987: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-2tz2q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2tz2q/deployments/test-rolling-update-deployment,UID:fedd52b9-6152-11ea-9978-0242ac11000d,ResourceVersion:14497,Generation:1,CreationTimestamp:2020-03-08 15:39:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-03-08 15:39:27 +0000 UTC 2020-03-08 15:39:27 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-08 15:39:29 +0000 UTC 2020-03-08 15:39:27 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Mar 8 15:39:29.990: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment
"test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-2tz2q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2tz2q/replicasets/test-rolling-update-deployment-75db98fb4c,UID:fedfda95-6152-11ea-9978-0242ac11000d,ResourceVersion:14488,Generation:1,CreationTimestamp:2020-03-08 15:39:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment fedd52b9-6152-11ea-9978-0242ac11000d 0xc001a2f377 0xc001a2f378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Mar 8 15:39:29.990: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 8 15:39:29.990: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-2tz2q,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-2tz2q/replicasets/test-rolling-update-controller,UID:fbe0188b-6152-11ea-9978-0242ac11000d,ResourceVersion:14496,Generation:2,CreationTimestamp:2020-03-08 15:39:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment fedd52b9-6152-11ea-9978-0242ac11000d 0xc001a2f207 0xc001a2f208}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Mar 8 15:39:29.992: INFO: Pod "test-rolling-update-deployment-75db98fb4c-28ddr" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-28ddr,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-2tz2q,SelfLink:/api/v1/namespaces/e2e-tests-deployment-2tz2q/pods/test-rolling-update-deployment-75db98fb4c-28ddr,UID:fee06508-6152-11ea-9978-0242ac11000d,ResourceVersion:14487,Generation:0,CreationTimestamp:2020-03-08 15:39:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c fedfda95-6152-11ea-9978-0242ac11000d 0xc001a2ffc7 0xc001a2ffc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-x2bxw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-x2bxw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-x2bxw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e42040} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e42200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:39:27 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:39:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:39:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-08 15:39:27 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.125,StartTime:2020-03-08 15:39:27 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-03-08 15:39:29 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://9668056c9ccbf49666cc70a6196708b43a265695c777e227b8d5ab2164ca6b32}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:39:29.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-2tz2q" 
for this suite. Mar 8 15:39:36.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:39:36.033: INFO: namespace: e2e-tests-deployment-2tz2q, resource: bindings, ignored listing per whitelist Mar 8 15:39:36.083: INFO: namespace e2e-tests-deployment-2tz2q deletion completed in 6.088262706s • [SLOW TEST:13.233 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:39:36.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 8 15:39:36.165: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03c41ee7-6153-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-8dzkg" to be "success or failure" Mar 8 15:39:36.168: INFO: 
Pod "downwardapi-volume-03c41ee7-6153-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.349143ms Mar 8 15:39:38.171: INFO: Pod "downwardapi-volume-03c41ee7-6153-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006609878s STEP: Saw pod success Mar 8 15:39:38.172: INFO: Pod "downwardapi-volume-03c41ee7-6153-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:39:38.174: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-03c41ee7-6153-11ea-b38e-0242ac11000f container client-container: STEP: delete the pod Mar 8 15:39:38.193: INFO: Waiting for pod downwardapi-volume-03c41ee7-6153-11ea-b38e-0242ac11000f to disappear Mar 8 15:39:38.210: INFO: Pod downwardapi-volume-03c41ee7-6153-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:39:38.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-8dzkg" for this suite. 
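The "Waiting up to 5m0s for pod … Elapsed: …" lines above come from a generic poll-until-condition loop in the e2e framework. A minimal sketch of that pattern (hypothetical names, not the framework's actual Go code):

```python
import time

def wait_for_condition(check, timeout_s=300.0, interval_s=2.0):
    """Poll `check` until it returns True or `timeout_s` elapses.

    Mirrors the "Waiting up to 5m0s ... Elapsed: ..." log pattern above;
    function and parameter names are illustrative, not the framework's API.
    """
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if check():
            return elapsed  # the framework logs this as "Elapsed: ..."
        if elapsed >= timeout_s:
            raise TimeoutError(f"condition not met after {elapsed:.3f}s")
        time.sleep(interval_s)
```

Each iteration corresponds to one "Phase=… Elapsed: …" line in the log; the 5m0s figure is the `timeout_s` bound.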
Mar 8 15:39:44.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:39:44.300: INFO: namespace: e2e-tests-downward-api-8dzkg, resource: bindings, ignored listing per whitelist Mar 8 15:39:44.334: INFO: namespace e2e-tests-downward-api-8dzkg deletion completed in 6.121080839s • [SLOW TEST:8.250 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:39:44.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 8 15:39:44.413: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08ac6e08-6153-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-dng7t" to be "success 
or failure" Mar 8 15:39:44.415: INFO: Pod "downwardapi-volume-08ac6e08-6153-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.616677ms Mar 8 15:39:46.420: INFO: Pod "downwardapi-volume-08ac6e08-6153-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007540534s Mar 8 15:39:48.424: INFO: Pod "downwardapi-volume-08ac6e08-6153-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011727989s STEP: Saw pod success Mar 8 15:39:48.424: INFO: Pod "downwardapi-volume-08ac6e08-6153-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:39:48.427: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-08ac6e08-6153-11ea-b38e-0242ac11000f container client-container: STEP: delete the pod Mar 8 15:39:48.468: INFO: Waiting for pod downwardapi-volume-08ac6e08-6153-11ea-b38e-0242ac11000f to disappear Mar 8 15:39:48.475: INFO: Pod downwardapi-volume-08ac6e08-6153-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:39:48.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-dng7t" for this suite. 
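The behaviour this test exercises is that when a container declares no cpu (or memory) limit, the downward API reports the node's allocatable value instead. A hypothetical one-line helper capturing that fallback (not Kubernetes source):

```python
def effective_limit(container_limit, node_allocatable):
    """Downward API default exercised by the test above: a container with
    no declared limit resolves to the node's allocatable value.
    Illustrative sketch only, not the actual Kubernetes resolution code.
    """
    return container_limit if container_limit is not None else node_allocatable
```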
Mar 8 15:39:54.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:39:54.527: INFO: namespace: e2e-tests-downward-api-dng7t, resource: bindings, ignored listing per whitelist Mar 8 15:39:54.563: INFO: namespace e2e-tests-downward-api-dng7t deletion completed in 6.084662802s • [SLOW TEST:10.229 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:39:54.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-0ec89209-6153-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume secrets Mar 8 15:39:54.663: INFO: Waiting up to 5m0s for pod "pod-secrets-0eca8d7d-6153-11ea-b38e-0242ac11000f" in namespace "e2e-tests-secrets-5wv2k" to be "success or failure" Mar 8 15:39:54.667: INFO: Pod "pod-secrets-0eca8d7d-6153-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.963203ms Mar 8 15:39:56.671: INFO: Pod "pod-secrets-0eca8d7d-6153-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008569339s Mar 8 15:39:58.676: INFO: Pod "pod-secrets-0eca8d7d-6153-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013318898s STEP: Saw pod success Mar 8 15:39:58.676: INFO: Pod "pod-secrets-0eca8d7d-6153-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:39:58.683: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-0eca8d7d-6153-11ea-b38e-0242ac11000f container secret-volume-test: STEP: delete the pod Mar 8 15:39:58.705: INFO: Waiting for pod pod-secrets-0eca8d7d-6153-11ea-b38e-0242ac11000f to disappear Mar 8 15:39:58.709: INFO: Pod pod-secrets-0eca8d7d-6153-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:39:58.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-5wv2k" for this suite. 
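The 'to be "success or failure"' wait seen throughout these tests resolves once the pod reaches a terminal phase: Succeeded satisfies the condition as success, Failed as failure, and any other phase keeps the poll going. A sketch of that check (hypothetical return values):

```python
def pod_finished(phase):
    """Terminal-phase check behind the framework's "success or failure"
    condition. Returns a result string for terminal phases, None while the
    pod is still Pending/Running. Sketch only; names are illustrative.
    """
    if phase == "Succeeded":
        return "success"
    if phase == "Failed":
        return "failure"
    return None  # not terminal yet: the poll loop keeps waiting
```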
Mar 8 15:40:04.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:40:04.777: INFO: namespace: e2e-tests-secrets-5wv2k, resource: bindings, ignored listing per whitelist Mar 8 15:40:04.829: INFO: namespace e2e-tests-secrets-5wv2k deletion completed in 6.115294327s • [SLOW TEST:10.265 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:40:04.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 8 15:40:04.957: INFO: Waiting up to 5m0s for pod "downward-api-14ec8bd7-6153-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-hvj2k" to be "success or failure" Mar 8 15:40:04.979: INFO: Pod "downward-api-14ec8bd7-6153-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.317343ms Mar 8 15:40:06.983: INFO: Pod "downward-api-14ec8bd7-6153-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026363493s STEP: Saw pod success Mar 8 15:40:06.984: INFO: Pod "downward-api-14ec8bd7-6153-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:40:06.986: INFO: Trying to get logs from node hunter-worker2 pod downward-api-14ec8bd7-6153-11ea-b38e-0242ac11000f container dapi-container: STEP: delete the pod Mar 8 15:40:07.033: INFO: Waiting for pod downward-api-14ec8bd7-6153-11ea-b38e-0242ac11000f to disappear Mar 8 15:40:07.038: INFO: Pod downward-api-14ec8bd7-6153-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:40:07.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-hvj2k" for this suite. Mar 8 15:40:13.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:40:13.114: INFO: namespace: e2e-tests-downward-api-hvj2k, resource: bindings, ignored listing per whitelist Mar 8 15:40:13.145: INFO: namespace e2e-tests-downward-api-hvj2k deletion completed in 6.103812547s • [SLOW TEST:8.316 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] 
StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:40:13.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-b96pw [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-b96pw STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-b96pw STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-b96pw STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-b96pw Mar 8 15:40:15.399: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-b96pw, name: ss-0, uid: 1a1dc96b-6153-11ea-9978-0242ac11000d, status phase: Pending. Waiting for statefulset controller to delete. Mar 8 15:40:17.876: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-b96pw, name: ss-0, uid: 1a1dc96b-6153-11ea-9978-0242ac11000d, status phase: Failed. Waiting for statefulset controller to delete. Mar 8 15:40:17.895: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-b96pw, name: ss-0, uid: 1a1dc96b-6153-11ea-9978-0242ac11000d, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 8 15:40:17.931: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-b96pw STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-b96pw STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-b96pw and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Mar 8 15:40:20.011: INFO: Deleting all statefulset in ns e2e-tests-statefulset-b96pw Mar 8 15:40:20.014: INFO: Scaling statefulset ss to 0 Mar 8 15:40:30.031: INFO: Waiting for statefulset status.replicas updated to 0 Mar 8 15:40:30.034: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:40:30.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-b96pw" for this suite. 
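The test above waits until stateful pod ss-0 has been deleted and recreated at least once, which shows up in the observations as the same pod name reappearing with a different UID. A hypothetical sketch of that detection over (name, uid) observation pairs:

```python
def recreated_at_least_once(observed):
    """Detect pod recreation from watch observations: a pod name seen with
    more than one UID must have been deleted and recreated. `observed` is a
    list of (name, uid) pairs; illustrative sketch, not the e2e test's code.
    """
    uids = {}
    for name, uid in observed:
        uids.setdefault(name, set()).add(uid)
    return any(len(seen) > 1 for seen in uids.values())
```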
Mar 8 15:40:36.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:40:36.141: INFO: namespace: e2e-tests-statefulset-b96pw, resource: bindings, ignored listing per whitelist Mar 8 15:40:36.161: INFO: namespace e2e-tests-statefulset-b96pw deletion completed in 6.110695992s • [SLOW TEST:23.015 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:40:36.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Mar 8 15:40:36.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=e2e-tests-kubectl-k6kf8' Mar 8 15:40:37.960: INFO: stderr: "" Mar 8 15:40:37.961: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 15:40:37.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k6kf8' Mar 8 15:40:38.062: INFO: stderr: "" Mar 8 15:40:38.062: INFO: stdout: "update-demo-nautilus-fvclw update-demo-nautilus-wblnb " Mar 8 15:40:38.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvclw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k6kf8' Mar 8 15:40:38.127: INFO: stderr: "" Mar 8 15:40:38.127: INFO: stdout: "" Mar 8 15:40:38.127: INFO: update-demo-nautilus-fvclw is created but not running Mar 8 15:40:43.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k6kf8' Mar 8 15:40:43.247: INFO: stderr: "" Mar 8 15:40:43.247: INFO: stdout: "update-demo-nautilus-fvclw update-demo-nautilus-wblnb " Mar 8 15:40:43.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvclw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k6kf8' Mar 8 15:40:43.338: INFO: stderr: "" Mar 8 15:40:43.338: INFO: stdout: "true" Mar 8 15:40:43.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvclw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k6kf8' Mar 8 15:40:43.413: INFO: stderr: "" Mar 8 15:40:43.413: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:40:43.413: INFO: validating pod update-demo-nautilus-fvclw Mar 8 15:40:43.416: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:40:43.416: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 15:40:43.416: INFO: update-demo-nautilus-fvclw is verified up and running Mar 8 15:40:43.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wblnb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k6kf8' Mar 8 15:40:43.493: INFO: stderr: "" Mar 8 15:40:43.493: INFO: stdout: "true" Mar 8 15:40:43.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wblnb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k6kf8' Mar 8 15:40:43.567: INFO: stderr: "" Mar 8 15:40:43.567: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:40:43.567: INFO: validating pod update-demo-nautilus-wblnb Mar 8 15:40:43.570: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:40:43.570: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 8 15:40:43.570: INFO: update-demo-nautilus-wblnb is verified up and running STEP: using delete to clean up resources Mar 8 15:40:43.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-k6kf8' Mar 8 15:40:43.652: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 8 15:40:43.652: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 8 15:40:43.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-k6kf8' Mar 8 15:40:43.724: INFO: stderr: "No resources found.\n" Mar 8 15:40:43.724: INFO: stdout: "" Mar 8 15:40:43.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-k6kf8 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 8 15:40:43.796: INFO: stderr: "" Mar 8 15:40:43.796: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:40:43.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-k6kf8" for this suite. 
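The kubectl Go template used above prints "true" only when a container with the expected name reports a `running` state in `status.containerStatuses`. The same logic in Python, over a dict shaped like `kubectl get pod -o json` output (sketch, hypothetical helper name):

```python
def container_running(pod, name="update-demo"):
    """Python equivalent of the Go template in the log: true iff the named
    container appears in containerStatuses with a `running` state entry.
    `pod` is a plain dict mirroring pod JSON; sketch only.
    """
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False
```

An empty template result (the "stdout: \"\"" lines above) corresponds to this returning False, which makes the test log "is created but not running" and retry.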
Mar 8 15:40:49.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:40:49.882: INFO: namespace: e2e-tests-kubectl-k6kf8, resource: bindings, ignored listing per whitelist Mar 8 15:40:49.908: INFO: namespace e2e-tests-kubectl-k6kf8 deletion completed in 6.109431294s • [SLOW TEST:13.747 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:40:49.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 8 15:40:50.003: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:40:52.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-vllg7" for this suite. Mar 8 15:41:00.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:41:00.823: INFO: namespace: e2e-tests-init-container-vllg7, resource: bindings, ignored listing per whitelist Mar 8 15:41:00.902: INFO: namespace e2e-tests-init-container-vllg7 deletion completed in 8.124605087s • [SLOW TEST:10.994 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:41:00.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A 
and ensuring the correct watchers observe the notification Mar 8 15:41:01.493: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-t4w6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4w6z/configmaps/e2e-watch-test-configmap-a,UID:3694aa9b-6153-11ea-9978-0242ac11000d,ResourceVersion:14980,Generation:0,CreationTimestamp:2020-03-08 15:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 15:41:01.493: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-t4w6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4w6z/configmaps/e2e-watch-test-configmap-a,UID:3694aa9b-6153-11ea-9978-0242ac11000d,ResourceVersion:14980,Generation:0,CreationTimestamp:2020-03-08 15:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 8 15:41:11.501: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-t4w6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4w6z/configmaps/e2e-watch-test-configmap-a,UID:3694aa9b-6153-11ea-9978-0242ac11000d,ResourceVersion:15000,Generation:0,CreationTimestamp:2020-03-08 15:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 8 15:41:11.501: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-t4w6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4w6z/configmaps/e2e-watch-test-configmap-a,UID:3694aa9b-6153-11ea-9978-0242ac11000d,ResourceVersion:15000,Generation:0,CreationTimestamp:2020-03-08 15:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 8 15:41:21.509: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-t4w6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4w6z/configmaps/e2e-watch-test-configmap-a,UID:3694aa9b-6153-11ea-9978-0242ac11000d,ResourceVersion:15020,Generation:0,CreationTimestamp:2020-03-08 15:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 8 15:41:21.509: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-t4w6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4w6z/configmaps/e2e-watch-test-configmap-a,UID:3694aa9b-6153-11ea-9978-0242ac11000d,ResourceVersion:15020,Generation:0,CreationTimestamp:2020-03-08 15:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 8 15:41:31.573: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-t4w6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4w6z/configmaps/e2e-watch-test-configmap-a,UID:3694aa9b-6153-11ea-9978-0242ac11000d,ResourceVersion:15040,Generation:0,CreationTimestamp:2020-03-08 15:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 8 15:41:31.573: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-t4w6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4w6z/configmaps/e2e-watch-test-configmap-a,UID:3694aa9b-6153-11ea-9978-0242ac11000d,ResourceVersion:15040,Generation:0,CreationTimestamp:2020-03-08 15:41:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 8 15:41:41.582: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-t4w6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4w6z/configmaps/e2e-watch-test-configmap-b,UID:4e858e44-6153-11ea-9978-0242ac11000d,ResourceVersion:15059,Generation:0,CreationTimestamp:2020-03-08 15:41:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 15:41:41.582: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-t4w6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4w6z/configmaps/e2e-watch-test-configmap-b,UID:4e858e44-6153-11ea-9978-0242ac11000d,ResourceVersion:15059,Generation:0,CreationTimestamp:2020-03-08 15:41:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 8 15:41:51.589: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-t4w6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4w6z/configmaps/e2e-watch-test-configmap-b,UID:4e858e44-6153-11ea-9978-0242ac11000d,ResourceVersion:15079,Generation:0,CreationTimestamp:2020-03-08 15:41:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 15:41:51.589: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-t4w6z,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4w6z/configmaps/e2e-watch-test-configmap-b,UID:4e858e44-6153-11ea-9978-0242ac11000d,ResourceVersion:15079,Generation:0,CreationTimestamp:2020-03-08 15:41:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:42:01.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-t4w6z" for this suite. 
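For context on the ADDED/MODIFIED/DELETED notifications above: the test runs watchers that filter ConfigMaps by the `watch-this-configmap` label, then mutates objects carrying matching labels. A minimal manifest reconstructed from the fields visible in the log (a sketch for illustration, not the test's actual fixture; the `data` value corresponds to the first "mutation" event) would look like:

```yaml
# Sketch of the watched object, reconstructed from the log output above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A   # watchers select on this label
data:
  mutation: "1"   # the test bumps this value to trigger MODIFIED events
```

Watchers registered with the label selector `watch-this-configmap=multiple-watchers-A` receive events for this object only; the configmap-B events later in the log go to a separate watcher with the `multiple-watchers-B` selector.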
Mar 8 15:42:07.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:42:07.653: INFO: namespace: e2e-tests-watch-t4w6z, resource: bindings, ignored listing per whitelist Mar 8 15:42:07.689: INFO: namespace e2e-tests-watch-t4w6z deletion completed in 6.094824072s • [SLOW TEST:66.787 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:42:07.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Mar 8 15:42:07.798: INFO: namespace e2e-tests-kubectl-ldm65 Mar 8 15:42:07.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ldm65' Mar 8 15:42:08.069: INFO: stderr: "" Mar 8 15:42:08.069: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Mar 8 15:42:09.074: INFO: Selector matched 1 pods for map[app:redis] Mar 8 15:42:09.074: INFO: Found 0 / 1 Mar 8 15:42:10.074: INFO: Selector matched 1 pods for map[app:redis] Mar 8 15:42:10.074: INFO: Found 1 / 1 Mar 8 15:42:10.074: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 8 15:42:10.083: INFO: Selector matched 1 pods for map[app:redis] Mar 8 15:42:10.083: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 8 15:42:10.083: INFO: wait on redis-master startup in e2e-tests-kubectl-ldm65 Mar 8 15:42:10.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-x78bd redis-master --namespace=e2e-tests-kubectl-ldm65' Mar 8 15:42:10.208: INFO: stderr: "" Mar 8 15:42:10.208: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 08 Mar 15:42:09.323 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 08 Mar 15:42:09.323 # Server started, Redis version 3.2.12\n1:M 08 Mar 15:42:09.323 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 08 Mar 15:42:09.323 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Mar 8 15:42:10.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-ldm65' Mar 8 15:42:10.325: INFO: stderr: "" Mar 8 15:42:10.325: INFO: stdout: "service/rm2 exposed\n" Mar 8 15:42:10.328: INFO: Service rm2 in namespace e2e-tests-kubectl-ldm65 found. STEP: exposing service Mar 8 15:42:12.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-ldm65' Mar 8 15:42:12.479: INFO: stderr: "" Mar 8 15:42:12.479: INFO: stdout: "service/rm3 exposed\n" Mar 8 15:42:12.483: INFO: Service rm3 in namespace e2e-tests-kubectl-ldm65 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:42:14.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-ldm65" for this suite. 
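The two `kubectl expose` invocations above (RC → `rm2`, then service → `rm3`) are roughly equivalent to creating Services declaratively. A hedged sketch of what `rm2` amounts to, with the selector assumed from the `map[app:redis]` pod selector seen earlier in the log:

```yaml
# Approximate declarative equivalent of:
#   kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:
    app: redis        # assumed; taken from the pod selector in the log
  ports:
  - port: 1234        # --port: the Service's own port
    targetPort: 6379  # --target-port: the container port traffic is forwarded to
```

Exposing `rm2` again as `rm3` simply produces a second Service with the same selector but `port: 2345`, which is what the test verifies.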
Mar 8 15:42:36.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:42:36.587: INFO: namespace: e2e-tests-kubectl-ldm65, resource: bindings, ignored listing per whitelist Mar 8 15:42:36.591: INFO: namespace e2e-tests-kubectl-ldm65 deletion completed in 22.101008788s • [SLOW TEST:28.902 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:42:36.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the initial replication controller Mar 8 15:42:36.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k552t' Mar 8 15:42:37.361: INFO: stderr: 
"" Mar 8 15:42:37.361: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 8 15:42:37.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k552t' Mar 8 15:42:37.448: INFO: stderr: "" Mar 8 15:42:37.448: INFO: stdout: "update-demo-nautilus-5jnkd update-demo-nautilus-n9wgl " Mar 8 15:42:37.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jnkd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k552t' Mar 8 15:42:37.528: INFO: stderr: "" Mar 8 15:42:37.528: INFO: stdout: "" Mar 8 15:42:37.528: INFO: update-demo-nautilus-5jnkd is created but not running Mar 8 15:42:42.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k552t' Mar 8 15:42:42.628: INFO: stderr: "" Mar 8 15:42:42.628: INFO: stdout: "update-demo-nautilus-5jnkd update-demo-nautilus-n9wgl " Mar 8 15:42:42.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jnkd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k552t' Mar 8 15:42:42.703: INFO: stderr: "" Mar 8 15:42:42.703: INFO: stdout: "true" Mar 8 15:42:42.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5jnkd -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k552t' Mar 8 15:42:42.779: INFO: stderr: "" Mar 8 15:42:42.779: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:42:42.779: INFO: validating pod update-demo-nautilus-5jnkd Mar 8 15:42:42.781: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:42:42.781: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 8 15:42:42.781: INFO: update-demo-nautilus-5jnkd is verified up and running Mar 8 15:42:42.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9wgl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k552t' Mar 8 15:42:42.843: INFO: stderr: "" Mar 8 15:42:42.843: INFO: stdout: "true" Mar 8 15:42:42.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n9wgl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k552t' Mar 8 15:42:42.904: INFO: stderr: "" Mar 8 15:42:42.904: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 8 15:42:42.904: INFO: validating pod update-demo-nautilus-n9wgl Mar 8 15:42:42.906: INFO: got data: { "image": "nautilus.jpg" } Mar 8 15:42:42.906: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 8 15:42:42.906: INFO: update-demo-nautilus-n9wgl is verified up and running STEP: rolling-update to new replication controller Mar 8 15:42:42.907: INFO: scanned /root for discovery docs: Mar 8 15:42:42.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-k552t' Mar 8 15:43:07.436: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 8 15:43:07.436: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 8 15:43:07.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k552t' Mar 8 15:43:07.557: INFO: stderr: "" Mar 8 15:43:07.557: INFO: stdout: "update-demo-kitten-gps58 update-demo-kitten-whbbv update-demo-nautilus-5jnkd " STEP: Replicas for name=update-demo: expected=2 actual=3 Mar 8 15:43:12.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-k552t' Mar 8 15:43:12.660: INFO: stderr: "" Mar 8 15:43:12.660: INFO: stdout: "update-demo-kitten-gps58 update-demo-kitten-whbbv " Mar 8 15:43:12.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gps58 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k552t' Mar 8 15:43:12.746: INFO: stderr: "" Mar 8 15:43:12.746: INFO: stdout: "true" Mar 8 15:43:12.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-gps58 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k552t' Mar 8 15:43:12.821: INFO: stderr: "" Mar 8 15:43:12.821: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 8 15:43:12.821: INFO: validating pod update-demo-kitten-gps58 Mar 8 15:43:12.824: INFO: got data: { "image": "kitten.jpg" } Mar 8 15:43:12.824: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
Mar 8 15:43:12.824: INFO: update-demo-kitten-gps58 is verified up and running Mar 8 15:43:12.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-whbbv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k552t' Mar 8 15:43:12.903: INFO: stderr: "" Mar 8 15:43:12.903: INFO: stdout: "true" Mar 8 15:43:12.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-whbbv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-k552t' Mar 8 15:43:12.992: INFO: stderr: "" Mar 8 15:43:12.992: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 8 15:43:12.992: INFO: validating pod update-demo-kitten-whbbv Mar 8 15:43:12.996: INFO: got data: { "image": "kitten.jpg" } Mar 8 15:43:12.996: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 8 15:43:12.996: INFO: update-demo-kitten-whbbv is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:43:12.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-k552t" for this suite. 
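As the stderr above notes, `kubectl rolling-update` is deprecated; the modern equivalent of the nautilus → kitten image swap is a Deployment plus `kubectl set image` / `kubectl rollout`. A hedged sketch under assumed names (the images are the ones actually used by the test; everything else is illustrative):

```yaml
# Declarative replacement for the deprecated rolling-update flow above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: update-demo        # assumed name, mirroring the RC in the log
spec:
  replicas: 2
  selector:
    matchLabels:
      name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
# Then roll to the new image and watch progress:
#   kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
#   kubectl rollout status deployment/update-demo
```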
Mar 8 15:43:35.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:43:35.018: INFO: namespace: e2e-tests-kubectl-k552t, resource: bindings, ignored listing per whitelist Mar 8 15:43:35.106: INFO: namespace e2e-tests-kubectl-k552t deletion completed in 22.10807092s • [SLOW TEST:58.515 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:43:35.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 8 15:43:35.226: INFO: Waiting up to 5m0s for pod "pod-9242360d-6153-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-gzbnw" to be "success or failure" Mar 8 15:43:35.241: INFO: Pod "pod-9242360d-6153-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.821028ms Mar 8 15:43:37.245: INFO: Pod "pod-9242360d-6153-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018905117s STEP: Saw pod success Mar 8 15:43:37.245: INFO: Pod "pod-9242360d-6153-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:43:37.248: INFO: Trying to get logs from node hunter-worker pod pod-9242360d-6153-11ea-b38e-0242ac11000f container test-container: STEP: delete the pod Mar 8 15:43:37.268: INFO: Waiting for pod pod-9242360d-6153-11ea-b38e-0242ac11000f to disappear Mar 8 15:43:37.297: INFO: Pod pod-9242360d-6153-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:43:37.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-gzbnw" for this suite. Mar 8 15:43:43.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:43:43.402: INFO: namespace: e2e-tests-emptydir-gzbnw, resource: bindings, ignored listing per whitelist Mar 8 15:43:43.407: INFO: namespace e2e-tests-emptydir-gzbnw deletion completed in 6.10621702s • [SLOW TEST:8.300 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:43:43.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 8 15:43:43.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97353619-6153-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-jm7pp" to be "success or failure" Mar 8 15:43:43.534: INFO: Pod "downwardapi-volume-97353619-6153-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.652896ms Mar 8 15:43:45.538: INFO: Pod "downwardapi-volume-97353619-6153-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007739543s STEP: Saw pod success Mar 8 15:43:45.539: INFO: Pod "downwardapi-volume-97353619-6153-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:43:45.541: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-97353619-6153-11ea-b38e-0242ac11000f container client-container: STEP: delete the pod Mar 8 15:43:45.560: INFO: Waiting for pod downwardapi-volume-97353619-6153-11ea-b38e-0242ac11000f to disappear Mar 8 15:43:45.581: INFO: Pod downwardapi-volume-97353619-6153-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:43:45.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jm7pp" for this suite. Mar 8 15:43:51.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:43:51.606: INFO: namespace: e2e-tests-downward-api-jm7pp, resource: bindings, ignored listing per whitelist Mar 8 15:43:51.648: INFO: namespace e2e-tests-downward-api-jm7pp deletion completed in 6.062675931s • [SLOW TEST:8.241 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 
8 15:43:51.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Mar 8 15:43:51.709: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:43:55.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-44cvk" for this suite. Mar 8 15:44:01.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:44:01.130: INFO: namespace: e2e-tests-init-container-44cvk, resource: bindings, ignored listing per whitelist Mar 8 15:44:01.232: INFO: namespace e2e-tests-init-container-44cvk deletion completed in 6.157560375s • [SLOW TEST:9.583 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:44:01.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-a1caee63-6153-11ea-b38e-0242ac11000f STEP: Creating secret with name s-test-opt-upd-a1caeeb0-6153-11ea-b38e-0242ac11000f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a1caee63-6153-11ea-b38e-0242ac11000f STEP: Updating secret s-test-opt-upd-a1caeeb0-6153-11ea-b38e-0242ac11000f STEP: Creating secret with name s-test-opt-create-a1caeece-6153-11ea-b38e-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:45:18.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-r9fpc" for this suite. 
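The "optional updates" scenario above (deleting one secret, updating another, creating a third, and watching the volume converge) relies on projected secret sources marked `optional: true`, so a missing secret does not fail the pod. A minimal sketch with assumed names and image (the test's generated `s-test-opt-*` names are not reproduced here):

```yaml
# Sketch: a pod consuming an optional secret via a projected volume.
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo   # assumed name
spec:
  containers:
  - name: main
    image: busybox              # assumed image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
  volumes:
  - name: creds
    projected:
      sources:
      - secret:
          name: my-optional-secret  # hypothetical; may be absent at pod start
          optional: true            # missing secret is tolerated, files appear once it exists
```

The kubelet periodically re-syncs projected volume contents, which is why the test can "wait to observe update in volume" after mutating the secrets.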
Mar 8 15:45:40.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:45:40.235: INFO: namespace: e2e-tests-projected-r9fpc, resource: bindings, ignored listing per whitelist
Mar 8 15:45:40.237: INFO: namespace e2e-tests-projected-r9fpc deletion completed in 22.107090976s
• [SLOW TEST:99.005 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:45:40.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 8 15:45:40.309: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dcd045fb-6153-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-hgftr" to be "success or failure"
Mar 8 15:45:40.324: INFO: Pod "downwardapi-volume-dcd045fb-6153-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.965453ms
Mar 8 15:45:42.330: INFO: Pod "downwardapi-volume-dcd045fb-6153-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02001524s
STEP: Saw pod success
Mar 8 15:45:42.330: INFO: Pod "downwardapi-volume-dcd045fb-6153-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:45:42.332: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-dcd045fb-6153-11ea-b38e-0242ac11000f container client-container:
STEP: delete the pod
Mar 8 15:45:42.347: INFO: Waiting for pod downwardapi-volume-dcd045fb-6153-11ea-b38e-0242ac11000f to disappear
Mar 8 15:45:42.351: INFO: Pod downwardapi-volume-dcd045fb-6153-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:45:42.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hgftr" for this suite.
Mar 8 15:45:48.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:45:48.431: INFO: namespace: e2e-tests-projected-hgftr, resource: bindings, ignored listing per whitelist
Mar 8 15:45:48.437: INFO: namespace e2e-tests-projected-hgftr deletion completed in 6.081898473s
• [SLOW TEST:8.199 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set DefaultMode on files [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:45:48.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Mar 8 15:45:48.594: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-9hs48,SelfLink:/api/v1/namespaces/e2e-tests-watch-9hs48/configmaps/e2e-watch-test-resource-version,UID:e1ba45ea-6153-11ea-9978-0242ac11000d,ResourceVersion:15878,Generation:0,CreationTimestamp:2020-03-08 15:45:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Mar 8 15:45:48.594: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-9hs48,SelfLink:/api/v1/namespaces/e2e-tests-watch-9hs48/configmaps/e2e-watch-test-resource-version,UID:e1ba45ea-6153-11ea-9978-0242ac11000d,ResourceVersion:15879,Generation:0,CreationTimestamp:2020-03-08 15:45:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:45:48.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-9hs48" for this suite.
Mar 8 15:45:54.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:45:54.638: INFO: namespace: e2e-tests-watch-9hs48, resource: bindings, ignored listing per whitelist
Mar 8 15:45:54.684: INFO: namespace e2e-tests-watch-9hs48 deletion completed in 6.07960288s
• [SLOW TEST:6.247 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:45:54.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Mar 8 15:45:54.772: INFO: Waiting up to 5m0s for pod "var-expansion-e56f6538-6153-11ea-b38e-0242ac11000f" in namespace "e2e-tests-var-expansion-r9ljc" to be "success or failure"
Mar 8 15:45:54.776: INFO: Pod "var-expansion-e56f6538-6153-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.75824ms
Mar 8 15:45:56.780: INFO: Pod "var-expansion-e56f6538-6153-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008781452s
STEP: Saw pod success
Mar 8 15:45:56.780: INFO: Pod "var-expansion-e56f6538-6153-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:45:56.783: INFO: Trying to get logs from node hunter-worker pod var-expansion-e56f6538-6153-11ea-b38e-0242ac11000f container dapi-container:
STEP: delete the pod
Mar 8 15:45:56.822: INFO: Waiting for pod var-expansion-e56f6538-6153-11ea-b38e-0242ac11000f to disappear
Mar 8 15:45:56.825: INFO: Pod var-expansion-e56f6538-6153-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:45:56.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-r9ljc" for this suite.
Mar 8 15:46:02.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:46:02.883: INFO: namespace: e2e-tests-var-expansion-r9ljc, resource: bindings, ignored listing per whitelist
Mar 8 15:46:02.919: INFO: namespace e2e-tests-var-expansion-r9ljc deletion completed in 6.087929905s
• [SLOW TEST:8.235 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:46:02.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-ea595e39-6153-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 8 15:46:03.068: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea59e11c-6153-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-rdpvb" to be "success or failure"
Mar 8 15:46:03.075: INFO: Pod "pod-projected-configmaps-ea59e11c-6153-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.155935ms
Mar 8 15:46:05.080: INFO: Pod "pod-projected-configmaps-ea59e11c-6153-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011537652s
STEP: Saw pod success
Mar 8 15:46:05.080: INFO: Pod "pod-projected-configmaps-ea59e11c-6153-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:46:05.083: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-ea59e11c-6153-11ea-b38e-0242ac11000f container projected-configmap-volume-test:
STEP: delete the pod
Mar 8 15:46:05.145: INFO: Waiting for pod pod-projected-configmaps-ea59e11c-6153-11ea-b38e-0242ac11000f to disappear
Mar 8 15:46:05.153: INFO: Pod pod-projected-configmaps-ea59e11c-6153-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:46:05.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rdpvb" for this suite.
Mar 8 15:46:11.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:46:11.198: INFO: namespace: e2e-tests-projected-rdpvb, resource: bindings, ignored listing per whitelist
Mar 8 15:46:11.240: INFO: namespace e2e-tests-projected-rdpvb deletion completed in 6.082330752s
• [SLOW TEST:8.320 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:46:11.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 8 15:46:11.312: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef4b036b-6153-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-7h7hg" to be "success or failure"
Mar 8 15:46:11.322: INFO: Pod "downwardapi-volume-ef4b036b-6153-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.614563ms
Mar 8 15:46:13.325: INFO: Pod "downwardapi-volume-ef4b036b-6153-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012917869s
STEP: Saw pod success
Mar 8 15:46:13.325: INFO: Pod "downwardapi-volume-ef4b036b-6153-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:46:13.328: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-ef4b036b-6153-11ea-b38e-0242ac11000f container client-container:
STEP: delete the pod
Mar 8 15:46:13.378: INFO: Waiting for pod downwardapi-volume-ef4b036b-6153-11ea-b38e-0242ac11000f to disappear
Mar 8 15:46:13.395: INFO: Pod downwardapi-volume-ef4b036b-6153-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:46:13.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7h7hg" for this suite.
Mar 8 15:46:19.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:46:19.479: INFO: namespace: e2e-tests-projected-7h7hg, resource: bindings, ignored listing per whitelist
Mar 8 15:46:19.485: INFO: namespace e2e-tests-projected-7h7hg deletion completed in 6.08689437s
• [SLOW TEST:8.245 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:46:19.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-4zfz7
Mar 8 15:46:21.595: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-4zfz7
STEP: checking the pod's current state and verifying that restartCount is present
Mar 8 15:46:21.597: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:50:22.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-4zfz7" for this suite.
Mar 8 15:50:28.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:50:28.490: INFO: namespace: e2e-tests-container-probe-4zfz7, resource: bindings, ignored listing per whitelist
Mar 8 15:50:28.534: INFO: namespace e2e-tests-container-probe-4zfz7 deletion completed in 6.089099951s
• [SLOW TEST:249.049 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:50:28.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 8 15:50:32.714: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 8 15:50:32.759: INFO: Pod pod-with-prestop-http-hook still exists
Mar 8 15:50:34.759: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 8 15:50:34.763: INFO: Pod pod-with-prestop-http-hook still exists
Mar 8 15:50:36.759: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 8 15:50:36.763: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:50:36.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9qs2c" for this suite.
Mar 8 15:50:58.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:50:58.861: INFO: namespace: e2e-tests-container-lifecycle-hook-9qs2c, resource: bindings, ignored listing per whitelist
Mar 8 15:50:58.875: INFO: namespace e2e-tests-container-lifecycle-hook-9qs2c deletion completed in 22.09870212s
• [SLOW TEST:30.340 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:50:58.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0308 15:51:29.541339 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 8 15:51:29.541: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:51:29.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-rfnsg" for this suite.
Mar 8 15:51:35.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:51:35.605: INFO: namespace: e2e-tests-gc-rfnsg, resource: bindings, ignored listing per whitelist
Mar 8 15:51:35.639: INFO: namespace e2e-tests-gc-rfnsg deletion completed in 6.095237844s
• [SLOW TEST:36.764 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:51:35.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 8 15:51:35.749: INFO: Waiting up to 5m0s for pod "pod-b0aa370a-6154-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-7ng67" to be "success or failure"
Mar 8 15:51:35.754: INFO: Pod "pod-b0aa370a-6154-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265876ms
Mar 8 15:51:37.771: INFO: Pod "pod-b0aa370a-6154-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021596597s
STEP: Saw pod success
Mar 8 15:51:37.771: INFO: Pod "pod-b0aa370a-6154-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:51:37.774: INFO: Trying to get logs from node hunter-worker pod pod-b0aa370a-6154-11ea-b38e-0242ac11000f container test-container:
STEP: delete the pod
Mar 8 15:51:37.801: INFO: Waiting for pod pod-b0aa370a-6154-11ea-b38e-0242ac11000f to disappear
Mar 8 15:51:37.810: INFO: Pod pod-b0aa370a-6154-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:51:37.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-7ng67" for this suite.
Mar 8 15:51:43.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:51:43.904: INFO: namespace: e2e-tests-emptydir-7ng67, resource: bindings, ignored listing per whitelist
Mar 8 15:51:43.919: INFO: namespace e2e-tests-emptydir-7ng67 deletion completed in 6.105714131s
• [SLOW TEST:8.280 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:51:43.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 8 15:51:44.038: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b596a9b8-6154-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-cdl6g" to be "success or failure"
Mar 8 15:51:44.051: INFO: Pod "downwardapi-volume-b596a9b8-6154-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.277574ms
Mar 8 15:51:46.054: INFO: Pod "downwardapi-volume-b596a9b8-6154-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016172669s
STEP: Saw pod success
Mar 8 15:51:46.054: INFO: Pod "downwardapi-volume-b596a9b8-6154-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 15:51:46.076: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b596a9b8-6154-11ea-b38e-0242ac11000f container client-container:
STEP: delete the pod
Mar 8 15:51:46.111: INFO: Waiting for pod downwardapi-volume-b596a9b8-6154-11ea-b38e-0242ac11000f to disappear
Mar 8 15:51:46.116: INFO: Pod downwardapi-volume-b596a9b8-6154-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 15:51:46.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cdl6g" for this suite.
Mar 8 15:51:52.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 15:51:52.178: INFO: namespace: e2e-tests-projected-cdl6g, resource: bindings, ignored listing per whitelist
Mar 8 15:51:52.204: INFO: namespace e2e-tests-projected-cdl6g deletion completed in 6.084901221s
• [SLOW TEST:8.284 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 15:51:52.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 8 15:51:52.305: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:51:52.306: INFO: Number of nodes with available pods: 0
Mar 8 15:51:52.306: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:51:53.309: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:51:53.312: INFO: Number of nodes with available pods: 0
Mar 8 15:51:53.312: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:51:54.311: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:51:54.313: INFO: Number of nodes with available pods: 0
Mar 8 15:51:54.314: INFO: Node hunter-worker is running more than one daemon pod
Mar 8 15:51:55.342: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:51:55.345: INFO: Number of nodes with available pods: 2
Mar 8 15:51:55.345: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Mar 8 15:51:55.362: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:51:55.379: INFO: Number of nodes with available pods: 1
Mar 8 15:51:55.379: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 8 15:51:56.382: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:51:56.384: INFO: Number of nodes with available pods: 1
Mar 8 15:51:56.384: INFO: Node hunter-worker2 is running more than one daemon pod
Mar 8 15:51:57.384: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 8 15:51:57.387: INFO: Number of nodes with available pods: 2
Mar 8 15:51:57.387: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-cbkgc, will wait for the garbage collector to delete the pods Mar 8 15:51:57.449: INFO: Deleting DaemonSet.extensions daemon-set took: 5.400794ms Mar 8 15:51:57.549: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.187203ms Mar 8 15:52:07.952: INFO: Number of nodes with available pods: 0 Mar 8 15:52:07.952: INFO: Number of running nodes: 0, number of available pods: 0 Mar 8 15:52:07.955: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-cbkgc/daemonsets","resourceVersion":"16911"},"items":null} Mar 8 15:52:07.957: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-cbkgc/pods","resourceVersion":"16911"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:52:07.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-cbkgc" for this suite. 
Mar 8 15:52:13.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:52:14.012: INFO: namespace: e2e-tests-daemonsets-cbkgc, resource: bindings, ignored listing per whitelist Mar 8 15:52:14.081: INFO: namespace e2e-tests-daemonsets-cbkgc deletion completed in 6.112172934s • [SLOW TEST:21.877 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:52:14.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 8 15:52:14.216: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:52:16.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-whrf9" for this suite. 
Mar 8 15:53:00.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:53:00.364: INFO: namespace: e2e-tests-pods-whrf9, resource: bindings, ignored listing per whitelist Mar 8 15:53:00.432: INFO: namespace e2e-tests-pods-whrf9 deletion completed in 44.103987453s • [SLOW TEST:46.350 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:53:00.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Mar 8 15:53:00.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 8 15:53:02.179: INFO: stderr: "" Mar 8 15:53:02.179: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32774\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32774/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:53:02.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-2c967" for this suite. Mar 8 15:53:08.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:53:08.260: INFO: namespace: e2e-tests-kubectl-2c967, resource: bindings, ignored listing per whitelist Mar 8 15:53:08.308: INFO: namespace e2e-tests-kubectl-2c967 deletion completed in 6.124860859s • [SLOW TEST:7.875 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:53:08.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 8 15:53:08.392: INFO: Waiting up to 5m0s for pod "pod-e7e44112-6154-11ea-b38e-0242ac11000f" in namespace "e2e-tests-emptydir-sqgtz" to be "success or failure" Mar 8 15:53:08.408: INFO: Pod "pod-e7e44112-6154-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.501117ms Mar 8 15:53:10.411: INFO: Pod "pod-e7e44112-6154-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019315644s STEP: Saw pod success Mar 8 15:53:10.411: INFO: Pod "pod-e7e44112-6154-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:53:10.421: INFO: Trying to get logs from node hunter-worker2 pod pod-e7e44112-6154-11ea-b38e-0242ac11000f container test-container: STEP: delete the pod Mar 8 15:53:10.447: INFO: Waiting for pod pod-e7e44112-6154-11ea-b38e-0242ac11000f to disappear Mar 8 15:53:10.451: INFO: Pod pod-e7e44112-6154-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:53:10.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sqgtz" for this suite. 
Mar 8 15:53:16.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:53:16.546: INFO: namespace: e2e-tests-emptydir-sqgtz, resource: bindings, ignored listing per whitelist Mar 8 15:53:16.560: INFO: namespace e2e-tests-emptydir-sqgtz deletion completed in 6.105874133s • [SLOW TEST:8.252 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:53:16.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Mar 8 15:53:16.644: INFO: Waiting up to 5m0s for pod "downward-api-ecce42b4-6154-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-c266m" to be "success or failure" Mar 8 15:53:16.664: INFO: Pod "downward-api-ecce42b4-6154-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.961539ms Mar 8 15:53:18.669: INFO: Pod "downward-api-ecce42b4-6154-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.024167284s Mar 8 15:53:20.673: INFO: Pod "downward-api-ecce42b4-6154-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028453507s STEP: Saw pod success Mar 8 15:53:20.673: INFO: Pod "downward-api-ecce42b4-6154-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 15:53:20.676: INFO: Trying to get logs from node hunter-worker2 pod downward-api-ecce42b4-6154-11ea-b38e-0242ac11000f container dapi-container: STEP: delete the pod Mar 8 15:53:20.700: INFO: Waiting for pod downward-api-ecce42b4-6154-11ea-b38e-0242ac11000f to disappear Mar 8 15:53:20.702: INFO: Pod downward-api-ecce42b4-6154-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:53:20.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-c266m" for this suite. Mar 8 15:53:26.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:53:26.771: INFO: namespace: e2e-tests-downward-api-c266m, resource: bindings, ignored listing per whitelist Mar 8 15:53:26.817: INFO: namespace e2e-tests-downward-api-c266m deletion completed in 6.098223087s • [SLOW TEST:10.257 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:53:26.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Mar 8 15:53:26.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-bq5gv' Mar 8 15:53:26.983: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 8 15:53:26.983: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Mar 8 15:53:29.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-bq5gv' Mar 8 15:53:29.144: INFO: stderr: "" Mar 8 15:53:29.144: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:53:29.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bq5gv" for this suite. Mar 8 15:53:51.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:53:51.180: INFO: namespace: e2e-tests-kubectl-bq5gv, resource: bindings, ignored listing per whitelist Mar 8 15:53:51.256: INFO: namespace e2e-tests-kubectl-bq5gv deletion completed in 22.108429179s • [SLOW TEST:24.438 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:53:51.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:53:53.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-6nvc5" for this suite. 
Mar 8 15:54:31.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:54:31.470: INFO: namespace: e2e-tests-kubelet-test-6nvc5, resource: bindings, ignored listing per whitelist Mar 8 15:54:31.500: INFO: namespace e2e-tests-kubelet-test-6nvc5 deletion completed in 38.12098021s • [SLOW TEST:40.243 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:54:31.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete 
[NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:54:59.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-runtime-s7xzs" for this suite. 
Mar 8 15:55:05.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:55:05.144: INFO: namespace: e2e-tests-container-runtime-s7xzs, resource: bindings, ignored listing per whitelist Mar 8 15:55:05.164: INFO: namespace e2e-tests-container-runtime-s7xzs deletion completed in 6.115282366s • [SLOW TEST:33.664 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:55:05.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0308 15:55:45.273184 6 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 8 15:55:45.273: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:55:45.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-b8bzb" for this suite. 
Mar 8 15:55:53.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:55:53.298: INFO: namespace: e2e-tests-gc-b8bzb, resource: bindings, ignored listing per whitelist Mar 8 15:55:53.345: INFO: namespace e2e-tests-gc-b8bzb deletion completed in 8.06978324s • [SLOW TEST:48.180 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:55:53.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-4a434cfb-6155-11ea-b38e-0242ac11000f STEP: Creating secret with name s-test-opt-upd-4a434d5b-6155-11ea-b38e-0242ac11000f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-4a434cfb-6155-11ea-b38e-0242ac11000f STEP: Updating secret s-test-opt-upd-4a434d5b-6155-11ea-b38e-0242ac11000f STEP: Creating secret with name s-test-opt-create-4a434d81-6155-11ea-b38e-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:57:13.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-xsmvg" for this suite. Mar 8 15:57:35.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:57:35.916: INFO: namespace: e2e-tests-secrets-xsmvg, resource: bindings, ignored listing per whitelist Mar 8 15:57:35.928: INFO: namespace e2e-tests-secrets-xsmvg deletion completed in 22.080266728s • [SLOW TEST:102.583 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:57:35.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: executing a command with run --rm and attach with stdin Mar 8 15:57:36.071: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-zdlkp run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 8 15:57:38.307: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0308 15:57:38.244103 3642 log.go:172] (0xc0008b2370) (0xc0007868c0) Create stream\nI0308 15:57:38.244178 3642 log.go:172] (0xc0008b2370) (0xc0007868c0) Stream added, broadcasting: 1\nI0308 15:57:38.246842 3642 log.go:172] (0xc0008b2370) Reply frame received for 1\nI0308 15:57:38.246898 3642 log.go:172] (0xc0008b2370) (0xc00041ad20) Create stream\nI0308 15:57:38.246914 3642 log.go:172] (0xc0008b2370) (0xc00041ad20) Stream added, broadcasting: 3\nI0308 15:57:38.247875 3642 log.go:172] (0xc0008b2370) Reply frame received for 3\nI0308 15:57:38.247929 3642 log.go:172] (0xc0008b2370) (0xc0003f4a00) Create stream\nI0308 15:57:38.247941 3642 log.go:172] (0xc0008b2370) (0xc0003f4a00) Stream added, broadcasting: 5\nI0308 15:57:38.248764 3642 log.go:172] (0xc0008b2370) Reply frame received for 5\nI0308 15:57:38.248782 3642 log.go:172] (0xc0008b2370) (0xc00041ae60) Create stream\nI0308 15:57:38.248788 3642 log.go:172] (0xc0008b2370) (0xc00041ae60) Stream added, broadcasting: 7\nI0308 15:57:38.249666 3642 log.go:172] (0xc0008b2370) Reply frame received for 7\nI0308 15:57:38.249861 3642 log.go:172] (0xc00041ad20) (3) Writing data frame\nI0308 15:57:38.250011 3642 log.go:172] (0xc00041ad20) (3) Writing data frame\nI0308 15:57:38.252593 3642 log.go:172] (0xc0008b2370) Data frame received for 5\nI0308 15:57:38.252606 3642 log.go:172] (0xc0003f4a00) (5) Data frame handling\nI0308 15:57:38.252615 3642 log.go:172] (0xc0003f4a00) (5) Data frame sent\nI0308 15:57:38.253264 3642 log.go:172] (0xc0008b2370) 
Data frame received for 5\nI0308 15:57:38.253283 3642 log.go:172] (0xc0003f4a00) (5) Data frame handling\nI0308 15:57:38.253297 3642 log.go:172] (0xc0003f4a00) (5) Data frame sent\nI0308 15:57:38.287445 3642 log.go:172] (0xc0008b2370) Data frame received for 7\nI0308 15:57:38.287461 3642 log.go:172] (0xc00041ae60) (7) Data frame handling\nI0308 15:57:38.287840 3642 log.go:172] (0xc0008b2370) Data frame received for 5\nI0308 15:57:38.287863 3642 log.go:172] (0xc0003f4a00) (5) Data frame handling\nI0308 15:57:38.288261 3642 log.go:172] (0xc0008b2370) Data frame received for 1\nI0308 15:57:38.288278 3642 log.go:172] (0xc0007868c0) (1) Data frame handling\nI0308 15:57:38.288288 3642 log.go:172] (0xc0007868c0) (1) Data frame sent\nI0308 15:57:38.288367 3642 log.go:172] (0xc0008b2370) (0xc00041ad20) Stream removed, broadcasting: 3\nI0308 15:57:38.288399 3642 log.go:172] (0xc0008b2370) (0xc0007868c0) Stream removed, broadcasting: 1\nI0308 15:57:38.288462 3642 log.go:172] (0xc0008b2370) (0xc0007868c0) Stream removed, broadcasting: 1\nI0308 15:57:38.288479 3642 log.go:172] (0xc0008b2370) (0xc00041ad20) Stream removed, broadcasting: 3\nI0308 15:57:38.288488 3642 log.go:172] (0xc0008b2370) (0xc0003f4a00) Stream removed, broadcasting: 5\nI0308 15:57:38.288628 3642 log.go:172] (0xc0008b2370) (0xc00041ae60) Stream removed, broadcasting: 7\nI0308 15:57:38.289026 3642 log.go:172] (0xc0008b2370) Go away received\n" Mar 8 15:57:38.307: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 15:57:40.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-zdlkp" for this suite. 
Mar 8 15:57:46.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 15:57:46.424: INFO: namespace: e2e-tests-kubectl-zdlkp, resource: bindings, ignored listing per whitelist Mar 8 15:57:46.443: INFO: namespace e2e-tests-kubectl-zdlkp deletion completed in 6.130057193s • [SLOW TEST:10.515 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 15:57:46.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-5hwz5 Mar 8 15:57:48.540: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-5hwz5 STEP: checking the pod's current state and verifying that 
restartCount is present Mar 8 15:57:48.543: INFO: Initial restart count of pod liveness-http is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 16:01:49.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-5hwz5" for this suite. Mar 8 16:01:55.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 16:01:55.357: INFO: namespace: e2e-tests-container-probe-5hwz5, resource: bindings, ignored listing per whitelist Mar 8 16:01:55.392: INFO: namespace e2e-tests-container-probe-5hwz5 deletion completed in 6.115047983s • [SLOW TEST:248.950 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 16:01:55.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test 
in namespace e2e-tests-pod-network-test-xlnv5 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 8 16:01:55.474: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 8 16:02:19.612: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.156:8080/dial?request=hostName&protocol=udp&host=10.244.2.133&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-xlnv5 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 16:02:19.612: INFO: >>> kubeConfig: /root/.kube/config I0308 16:02:19.645860 6 log.go:172] (0xc000ace370) (0xc000f37d60) Create stream I0308 16:02:19.645901 6 log.go:172] (0xc000ace370) (0xc000f37d60) Stream added, broadcasting: 1 I0308 16:02:19.648544 6 log.go:172] (0xc000ace370) Reply frame received for 1 I0308 16:02:19.648589 6 log.go:172] (0xc000ace370) (0xc000f37ea0) Create stream I0308 16:02:19.648607 6 log.go:172] (0xc000ace370) (0xc000f37ea0) Stream added, broadcasting: 3 I0308 16:02:19.649387 6 log.go:172] (0xc000ace370) Reply frame received for 3 I0308 16:02:19.649414 6 log.go:172] (0xc000ace370) (0xc0012a8500) Create stream I0308 16:02:19.649424 6 log.go:172] (0xc000ace370) (0xc0012a8500) Stream added, broadcasting: 5 I0308 16:02:19.650280 6 log.go:172] (0xc000ace370) Reply frame received for 5 I0308 16:02:19.712428 6 log.go:172] (0xc000ace370) Data frame received for 3 I0308 16:02:19.712451 6 log.go:172] (0xc000f37ea0) (3) Data frame handling I0308 16:02:19.712469 6 log.go:172] (0xc000f37ea0) (3) Data frame sent I0308 16:02:19.713303 6 log.go:172] (0xc000ace370) Data frame received for 3 I0308 16:02:19.713344 6 log.go:172] (0xc000ace370) Data frame received for 5 I0308 16:02:19.713385 6 log.go:172] (0xc0012a8500) (5) Data frame handling I0308 16:02:19.713411 6 log.go:172] (0xc000f37ea0) (3) Data frame handling I0308 16:02:19.715112 6 log.go:172] (0xc000ace370) Data frame received for 
1 I0308 16:02:19.715139 6 log.go:172] (0xc000f37d60) (1) Data frame handling I0308 16:02:19.715160 6 log.go:172] (0xc000f37d60) (1) Data frame sent I0308 16:02:19.715176 6 log.go:172] (0xc000ace370) (0xc000f37d60) Stream removed, broadcasting: 1 I0308 16:02:19.715195 6 log.go:172] (0xc000ace370) Go away received I0308 16:02:19.715299 6 log.go:172] (0xc000ace370) (0xc000f37d60) Stream removed, broadcasting: 1 I0308 16:02:19.715318 6 log.go:172] (0xc000ace370) (0xc000f37ea0) Stream removed, broadcasting: 3 I0308 16:02:19.715332 6 log.go:172] (0xc000ace370) (0xc0012a8500) Stream removed, broadcasting: 5 Mar 8 16:02:19.715: INFO: Waiting for endpoints: map[] Mar 8 16:02:19.719: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.156:8080/dial?request=hostName&protocol=udp&host=10.244.1.155&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-xlnv5 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 8 16:02:19.719: INFO: >>> kubeConfig: /root/.kube/config I0308 16:02:19.755227 6 log.go:172] (0xc0000eb4a0) (0xc0012a8780) Create stream I0308 16:02:19.755254 6 log.go:172] (0xc0000eb4a0) (0xc0012a8780) Stream added, broadcasting: 1 I0308 16:02:19.758169 6 log.go:172] (0xc0000eb4a0) Reply frame received for 1 I0308 16:02:19.758205 6 log.go:172] (0xc0000eb4a0) (0xc0012a8820) Create stream I0308 16:02:19.758215 6 log.go:172] (0xc0000eb4a0) (0xc0012a8820) Stream added, broadcasting: 3 I0308 16:02:19.759168 6 log.go:172] (0xc0000eb4a0) Reply frame received for 3 I0308 16:02:19.759221 6 log.go:172] (0xc0000eb4a0) (0xc001384640) Create stream I0308 16:02:19.759233 6 log.go:172] (0xc0000eb4a0) (0xc001384640) Stream added, broadcasting: 5 I0308 16:02:19.760081 6 log.go:172] (0xc0000eb4a0) Reply frame received for 5 I0308 16:02:19.834810 6 log.go:172] (0xc0000eb4a0) Data frame received for 3 I0308 16:02:19.834837 6 log.go:172] (0xc0012a8820) (3) Data frame handling I0308 
16:02:19.834857 6 log.go:172] (0xc0012a8820) (3) Data frame sent I0308 16:02:19.835612 6 log.go:172] (0xc0000eb4a0) Data frame received for 3 I0308 16:02:19.835648 6 log.go:172] (0xc0012a8820) (3) Data frame handling I0308 16:02:19.835673 6 log.go:172] (0xc0000eb4a0) Data frame received for 5 I0308 16:02:19.835705 6 log.go:172] (0xc001384640) (5) Data frame handling I0308 16:02:19.837185 6 log.go:172] (0xc0000eb4a0) Data frame received for 1 I0308 16:02:19.837207 6 log.go:172] (0xc0012a8780) (1) Data frame handling I0308 16:02:19.837227 6 log.go:172] (0xc0012a8780) (1) Data frame sent I0308 16:02:19.837245 6 log.go:172] (0xc0000eb4a0) (0xc0012a8780) Stream removed, broadcasting: 1 I0308 16:02:19.837263 6 log.go:172] (0xc0000eb4a0) Go away received I0308 16:02:19.837373 6 log.go:172] (0xc0000eb4a0) (0xc0012a8780) Stream removed, broadcasting: 1 I0308 16:02:19.837391 6 log.go:172] (0xc0000eb4a0) (0xc0012a8820) Stream removed, broadcasting: 3 I0308 16:02:19.837403 6 log.go:172] (0xc0000eb4a0) (0xc001384640) Stream removed, broadcasting: 5 Mar 8 16:02:19.837: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 16:02:19.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-xlnv5" for this suite. 
Mar 8 16:02:39.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 16:02:39.935: INFO: namespace: e2e-tests-pod-network-test-xlnv5, resource: bindings, ignored listing per whitelist Mar 8 16:02:39.947: INFO: namespace e2e-tests-pod-network-test-xlnv5 deletion completed in 20.105659756s • [SLOW TEST:44.554 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 16:02:39.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-3ca4e488-6156-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 8 16:02:40.089: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ca5eeed-6156-11ea-b38e-0242ac11000f" in namespace "e2e-tests-configmap-94wmf" to be "success or failure" Mar 8 16:02:40.093: INFO: Pod 
"pod-configmaps-3ca5eeed-6156-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291267ms Mar 8 16:02:42.097: INFO: Pod "pod-configmaps-3ca5eeed-6156-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008634642s STEP: Saw pod success Mar 8 16:02:42.098: INFO: Pod "pod-configmaps-3ca5eeed-6156-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 16:02:42.101: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-3ca5eeed-6156-11ea-b38e-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 8 16:02:42.154: INFO: Waiting for pod pod-configmaps-3ca5eeed-6156-11ea-b38e-0242ac11000f to disappear Mar 8 16:02:42.159: INFO: Pod pod-configmaps-3ca5eeed-6156-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 16:02:42.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-94wmf" for this suite. 
Mar 8 16:02:48.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 16:02:48.192: INFO: namespace: e2e-tests-configmap-94wmf, resource: bindings, ignored listing per whitelist Mar 8 16:02:48.252: INFO: namespace e2e-tests-configmap-94wmf deletion completed in 6.090040005s • [SLOW TEST:8.305 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 16:02:48.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Mar 8 16:03:08.360: INFO: Container started at 2020-03-08 16:02:50 +0000 UTC, pod became ready at 2020-03-08 16:03:08 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 16:03:08.360: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-hngks" for this suite. Mar 8 16:03:30.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 16:03:30.429: INFO: namespace: e2e-tests-container-probe-hngks, resource: bindings, ignored listing per whitelist Mar 8 16:03:30.458: INFO: namespace e2e-tests-container-probe-hngks deletion completed in 22.095952242s • [SLOW TEST:42.206 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 16:03:30.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 8 16:03:30.555: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5abaea1e-6156-11ea-b38e-0242ac11000f" in namespace 
"e2e-tests-projected-ppgj9" to be "success or failure" Mar 8 16:03:30.559: INFO: Pod "downwardapi-volume-5abaea1e-6156-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085476ms Mar 8 16:03:32.562: INFO: Pod "downwardapi-volume-5abaea1e-6156-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007112931s Mar 8 16:03:34.565: INFO: Pod "downwardapi-volume-5abaea1e-6156-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010142523s STEP: Saw pod success Mar 8 16:03:34.565: INFO: Pod "downwardapi-volume-5abaea1e-6156-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 16:03:34.567: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-5abaea1e-6156-11ea-b38e-0242ac11000f container client-container: STEP: delete the pod Mar 8 16:03:34.603: INFO: Waiting for pod downwardapi-volume-5abaea1e-6156-11ea-b38e-0242ac11000f to disappear Mar 8 16:03:34.613: INFO: Pod downwardapi-volume-5abaea1e-6156-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 16:03:34.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-ppgj9" for this suite. 
Mar 8 16:03:40.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 16:03:40.679: INFO: namespace: e2e-tests-projected-ppgj9, resource: bindings, ignored listing per whitelist Mar 8 16:03:40.745: INFO: namespace e2e-tests-projected-ppgj9 deletion completed in 6.13008509s • [SLOW TEST:10.287 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 16:03:40.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 8 16:03:40.856: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-w2rb5,SelfLink:/api/v1/namespaces/e2e-tests-watch-w2rb5/configmaps/e2e-watch-test-label-changed,UID:60d9c483-6156-11ea-9978-0242ac11000d,ResourceVersion:18921,Generation:0,CreationTimestamp:2020-03-08 16:03:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 8 16:03:40.856: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-w2rb5,SelfLink:/api/v1/namespaces/e2e-tests-watch-w2rb5/configmaps/e2e-watch-test-label-changed,UID:60d9c483-6156-11ea-9978-0242ac11000d,ResourceVersion:18922,Generation:0,CreationTimestamp:2020-03-08 16:03:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 8 16:03:40.856: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-w2rb5,SelfLink:/api/v1/namespaces/e2e-tests-watch-w2rb5/configmaps/e2e-watch-test-label-changed,UID:60d9c483-6156-11ea-9978-0242ac11000d,ResourceVersion:18923,Generation:0,CreationTimestamp:2020-03-08 16:03:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 8 16:03:50.883: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-w2rb5,SelfLink:/api/v1/namespaces/e2e-tests-watch-w2rb5/configmaps/e2e-watch-test-label-changed,UID:60d9c483-6156-11ea-9978-0242ac11000d,ResourceVersion:18944,Generation:0,CreationTimestamp:2020-03-08 16:03:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 8 16:03:50.883: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-w2rb5,SelfLink:/api/v1/namespaces/e2e-tests-watch-w2rb5/configmaps/e2e-watch-test-label-changed,UID:60d9c483-6156-11ea-9978-0242ac11000d,ResourceVersion:18945,Generation:0,CreationTimestamp:2020-03-08 16:03:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 8 16:03:50.883: INFO: 
Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-w2rb5,SelfLink:/api/v1/namespaces/e2e-tests-watch-w2rb5/configmaps/e2e-watch-test-label-changed,UID:60d9c483-6156-11ea-9978-0242ac11000d,ResourceVersion:18946,Generation:0,CreationTimestamp:2020-03-08 16:03:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 16:03:50.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-w2rb5" for this suite. Mar 8 16:03:56.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 16:03:56.968: INFO: namespace: e2e-tests-watch-w2rb5, resource: bindings, ignored listing per whitelist Mar 8 16:03:56.980: INFO: namespace e2e-tests-watch-w2rb5 deletion completed in 6.093315333s • [SLOW TEST:16.235 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 16:03:56.981: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-6a9018df-6156-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume configMaps Mar 8 16:03:57.127: INFO: Waiting up to 5m0s for pod "pod-configmaps-6a90f97a-6156-11ea-b38e-0242ac11000f" in namespace "e2e-tests-configmap-mcnst" to be "success or failure" Mar 8 16:03:57.130: INFO: Pod "pod-configmaps-6a90f97a-6156-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.919558ms Mar 8 16:03:59.134: INFO: Pod "pod-configmaps-6a90f97a-6156-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007097207s STEP: Saw pod success Mar 8 16:03:59.134: INFO: Pod "pod-configmaps-6a90f97a-6156-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 16:03:59.138: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-6a90f97a-6156-11ea-b38e-0242ac11000f container configmap-volume-test: STEP: delete the pod Mar 8 16:03:59.156: INFO: Waiting for pod pod-configmaps-6a90f97a-6156-11ea-b38e-0242ac11000f to disappear Mar 8 16:03:59.171: INFO: Pod pod-configmaps-6a90f97a-6156-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 16:03:59.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-mcnst" for this suite. 
Mar 8 16:04:05.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 16:04:05.270: INFO: namespace: e2e-tests-configmap-mcnst, resource: bindings, ignored listing per whitelist Mar 8 16:04:05.274: INFO: namespace e2e-tests-configmap-mcnst deletion completed in 6.099127648s • [SLOW TEST:8.294 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 16:04:05.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-6f7b75e4-6156-11ea-b38e-0242ac11000f STEP: Creating a pod to test consume secrets Mar 8 16:04:05.385: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6f7da648-6156-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-mbc6n" to be "success or failure" Mar 8 16:04:05.401: INFO: Pod "pod-projected-secrets-6f7da648-6156-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.077337ms Mar 8 16:04:07.405: INFO: Pod "pod-projected-secrets-6f7da648-6156-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019507288s STEP: Saw pod success Mar 8 16:04:07.405: INFO: Pod "pod-projected-secrets-6f7da648-6156-11ea-b38e-0242ac11000f" satisfied condition "success or failure" Mar 8 16:04:07.408: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-6f7da648-6156-11ea-b38e-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Mar 8 16:04:07.455: INFO: Waiting for pod pod-projected-secrets-6f7da648-6156-11ea-b38e-0242ac11000f to disappear Mar 8 16:04:07.458: INFO: Pod pod-projected-secrets-6f7da648-6156-11ea-b38e-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 16:04:07.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mbc6n" for this suite. 
Mar 8 16:04:13.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Mar 8 16:04:13.489: INFO: namespace: e2e-tests-projected-mbc6n, resource: bindings, ignored listing per whitelist Mar 8 16:04:13.556: INFO: namespace e2e-tests-projected-mbc6n deletion completed in 6.094349445s • [SLOW TEST:8.281 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 16:04:13.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Mar 8 16:04:13.663: INFO: Waiting up to 5m0s for pod "downwardapi-volume-746ca27a-6156-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-xknmn" to be "success or failure" Mar 8 16:04:13.668: INFO: Pod "downwardapi-volume-746ca27a-6156-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", 
readiness=false. Elapsed: 5.209161ms
Mar 8 16:04:15.673: INFO: Pod "downwardapi-volume-746ca27a-6156-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009722738s
Mar 8 16:04:17.677: INFO: Pod "downwardapi-volume-746ca27a-6156-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014081041s
STEP: Saw pod success
Mar 8 16:04:17.677: INFO: Pod "downwardapi-volume-746ca27a-6156-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:04:17.680: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-746ca27a-6156-11ea-b38e-0242ac11000f container client-container:
STEP: delete the pod
Mar 8 16:04:17.723: INFO: Waiting for pod downwardapi-volume-746ca27a-6156-11ea-b38e-0242ac11000f to disappear
Mar 8 16:04:17.730: INFO: Pod downwardapi-volume-746ca27a-6156-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:04:17.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xknmn" for this suite.
Mar 8 16:04:23.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:04:23.797: INFO: namespace: e2e-tests-projected-xknmn, resource: bindings, ignored listing per whitelist
Mar 8 16:04:23.822: INFO: namespace e2e-tests-projected-xknmn deletion completed in 6.088721231s
• [SLOW TEST:10.266 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:04:23.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-7a82fb87-6156-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 8 16:04:23.923: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7a89d1f4-6156-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-gcw82" to be "success or failure"
Mar 8 16:04:23.928: INFO: Pod "pod-projected-configmaps-7a89d1f4-6156-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.963144ms
Mar 8 16:04:25.931: INFO: Pod "pod-projected-configmaps-7a89d1f4-6156-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008632001s
STEP: Saw pod success
Mar 8 16:04:25.931: INFO: Pod "pod-projected-configmaps-7a89d1f4-6156-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:04:25.935: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-7a89d1f4-6156-11ea-b38e-0242ac11000f container projected-configmap-volume-test:
STEP: delete the pod
Mar 8 16:04:25.960: INFO: Waiting for pod pod-projected-configmaps-7a89d1f4-6156-11ea-b38e-0242ac11000f to disappear
Mar 8 16:04:25.963: INFO: Pod pod-projected-configmaps-7a89d1f4-6156-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:04:25.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gcw82" for this suite.
Mar 8 16:04:31.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:04:32.038: INFO: namespace: e2e-tests-projected-gcw82, resource: bindings, ignored listing per whitelist
Mar 8 16:04:32.064: INFO: namespace e2e-tests-projected-gcw82 deletion completed in 6.097644473s
• [SLOW TEST:8.242 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:04:32.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Mar 8 16:04:32.191: INFO: Waiting up to 5m0s for pod "downward-api-7f773d56-6156-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-wgs6p" to be "success or failure"
Mar 8 16:04:32.195: INFO: Pod "downward-api-7f773d56-6156-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.954754ms
Mar 8 16:04:34.200: INFO: Pod "downward-api-7f773d56-6156-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008544793s
STEP: Saw pod success
Mar 8 16:04:34.200: INFO: Pod "downward-api-7f773d56-6156-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:04:34.203: INFO: Trying to get logs from node hunter-worker pod downward-api-7f773d56-6156-11ea-b38e-0242ac11000f container dapi-container:
STEP: delete the pod
Mar 8 16:04:34.239: INFO: Waiting for pod downward-api-7f773d56-6156-11ea-b38e-0242ac11000f to disappear
Mar 8 16:04:34.243: INFO: Pod downward-api-7f773d56-6156-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:04:34.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wgs6p" for this suite.
Mar 8 16:04:40.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:04:40.273: INFO: namespace: e2e-tests-downward-api-wgs6p, resource: bindings, ignored listing per whitelist
Mar 8 16:04:40.323: INFO: namespace e2e-tests-downward-api-wgs6p deletion completed in 6.076928363s
• [SLOW TEST:8.258 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:04:40.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-prk7m.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-prk7m.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-prk7m.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-prk7m.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-prk7m.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-prk7m.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 8 16:04:54.506: INFO: DNS probes using e2e-tests-dns-prk7m/dns-test-845c04cd-6156-11ea-b38e-0242ac11000f succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:04:54.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-prk7m" for this suite.
Mar 8 16:05:00.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:05:00.615: INFO: namespace: e2e-tests-dns-prk7m, resource: bindings, ignored listing per whitelist
Mar 8 16:05:00.627: INFO: namespace e2e-tests-dns-prk7m deletion completed in 6.075045279s
• [SLOW TEST:20.304 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:05:00.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Mar 8 16:05:05.761: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:05:06.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-s5rv7" for this suite.
Mar 8 16:05:28.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:05:28.882: INFO: namespace: e2e-tests-replicaset-s5rv7, resource: bindings, ignored listing per whitelist
Mar 8 16:05:28.928: INFO: namespace e2e-tests-replicaset-s5rv7 deletion completed in 22.123249262s
• [SLOW TEST:28.301 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:05:28.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 8 16:05:31.569: INFO: Successfully updated pod "annotationupdatea15737f6-6156-11ea-b38e-0242ac11000f"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:05:33.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h7b7h" for this suite.
Mar 8 16:05:55.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:05:55.650: INFO: namespace: e2e-tests-projected-h7b7h, resource: bindings, ignored listing per whitelist
Mar 8 16:05:55.678: INFO: namespace e2e-tests-projected-h7b7h deletion completed in 22.087516304s
• [SLOW TEST:26.750 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:05:55.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Mar 8 16:05:55.757: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix610445519/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:05:55.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-s9cl2" for this suite.
Mar 8 16:06:01.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:06:01.886: INFO: namespace: e2e-tests-kubectl-s9cl2, resource: bindings, ignored listing per whitelist
Mar 8 16:06:01.905: INFO: namespace e2e-tests-kubectl-s9cl2 deletion completed in 6.082522773s
• [SLOW TEST:6.227 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:06:01.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-n9j55
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Mar 8 16:06:01.976: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Mar 8 16:06:22.072: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.166:8080/dial?request=hostName&protocol=http&host=10.244.2.138&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-n9j55 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 16:06:22.072: INFO: >>> kubeConfig: /root/.kube/config
I0308 16:06:22.096291 6 log.go:172] (0xc0017d22c0) (0xc002a6aa00) Create stream
I0308 16:06:22.096318 6 log.go:172] (0xc0017d22c0) (0xc002a6aa00) Stream added, broadcasting: 1
I0308 16:06:22.097849 6 log.go:172] (0xc0017d22c0) Reply frame received for 1
I0308 16:06:22.097881 6 log.go:172] (0xc0017d22c0) (0xc002185360) Create stream
I0308 16:06:22.097891 6 log.go:172] (0xc0017d22c0) (0xc002185360) Stream added, broadcasting: 3
I0308 16:06:22.098632 6 log.go:172] (0xc0017d22c0) Reply frame received for 3
I0308 16:06:22.098656 6 log.go:172] (0xc0017d22c0) (0xc002a6ab40) Create stream
I0308 16:06:22.098664 6 log.go:172] (0xc0017d22c0) (0xc002a6ab40) Stream added, broadcasting: 5
I0308 16:06:22.099354 6 log.go:172] (0xc0017d22c0) Reply frame received for 5
I0308 16:06:22.158171 6 log.go:172] (0xc0017d22c0) Data frame received for 3
I0308 16:06:22.158196 6 log.go:172] (0xc002185360) (3) Data frame handling
I0308 16:06:22.158209 6 log.go:172] (0xc002185360) (3) Data frame sent
I0308 16:06:22.158505 6 log.go:172] (0xc0017d22c0) Data frame received for 3
I0308 16:06:22.158518 6 log.go:172] (0xc002185360) (3) Data frame handling
I0308 16:06:22.158776 6 log.go:172] (0xc0017d22c0) Data frame received for 5
I0308 16:06:22.158795 6 log.go:172] (0xc002a6ab40) (5) Data frame handling
I0308 16:06:22.159970 6 log.go:172] (0xc0017d22c0) Data frame received for 1
I0308 16:06:22.159997 6 log.go:172] (0xc002a6aa00) (1) Data frame handling
I0308 16:06:22.160011 6 log.go:172] (0xc002a6aa00) (1) Data frame sent
I0308 16:06:22.160026 6 log.go:172] (0xc0017d22c0) (0xc002a6aa00) Stream removed, broadcasting: 1
I0308 16:06:22.160040 6 log.go:172] (0xc0017d22c0) Go away received
I0308 16:06:22.160203 6 log.go:172] (0xc0017d22c0) (0xc002a6aa00) Stream removed, broadcasting: 1
I0308 16:06:22.160225 6 log.go:172] (0xc0017d22c0) (0xc002185360) Stream removed, broadcasting: 3
I0308 16:06:22.160242 6 log.go:172] (0xc0017d22c0) (0xc002a6ab40) Stream removed, broadcasting: 5
Mar 8 16:06:22.160: INFO: Waiting for endpoints: map[]
Mar 8 16:06:22.163: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.166:8080/dial?request=hostName&protocol=http&host=10.244.1.165&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-n9j55 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 8 16:06:22.163: INFO: >>> kubeConfig: /root/.kube/config
I0308 16:06:22.187830 6 log.go:172] (0xc000b9a580) (0xc0021855e0) Create stream
I0308 16:06:22.187851 6 log.go:172] (0xc000b9a580) (0xc0021855e0) Stream added, broadcasting: 1
I0308 16:06:22.191875 6 log.go:172] (0xc000b9a580) Reply frame received for 1
I0308 16:06:22.191904 6 log.go:172] (0xc000b9a580) (0xc0024359a0) Create stream
I0308 16:06:22.191914 6 log.go:172] (0xc000b9a580) (0xc0024359a0) Stream added, broadcasting: 3
I0308 16:06:22.192602 6 log.go:172] (0xc000b9a580) Reply frame received for 3
I0308 16:06:22.192647 6 log.go:172] (0xc000b9a580) (0xc002185720) Create stream
I0308 16:06:22.192659 6 log.go:172] (0xc000b9a580) (0xc002185720) Stream added, broadcasting: 5
I0308 16:06:22.193669 6 log.go:172] (0xc000b9a580) Reply frame received for 5
I0308 16:06:22.262228 6 log.go:172] (0xc000b9a580) Data frame received for 3
I0308 16:06:22.262252 6 log.go:172] (0xc0024359a0) (3) Data frame handling
I0308 16:06:22.262265 6 log.go:172] (0xc0024359a0) (3) Data frame sent
I0308 16:06:22.262515 6 log.go:172] (0xc000b9a580) Data frame received for 3
I0308 16:06:22.262538 6 log.go:172] (0xc0024359a0) (3) Data frame handling
I0308 16:06:22.262901 6 log.go:172] (0xc000b9a580) Data frame received for 5
I0308 16:06:22.262920 6 log.go:172] (0xc002185720) (5) Data frame handling
I0308 16:06:22.263804 6 log.go:172] (0xc000b9a580) Data frame received for 1
I0308 16:06:22.263840 6 log.go:172] (0xc0021855e0) (1) Data frame handling
I0308 16:06:22.263876 6 log.go:172] (0xc0021855e0) (1) Data frame sent
I0308 16:06:22.263906 6 log.go:172] (0xc000b9a580) (0xc0021855e0) Stream removed, broadcasting: 1
I0308 16:06:22.263937 6 log.go:172] (0xc000b9a580) Go away received
I0308 16:06:22.263999 6 log.go:172] (0xc000b9a580) (0xc0021855e0) Stream removed, broadcasting: 1
I0308 16:06:22.264011 6 log.go:172] (0xc000b9a580) (0xc0024359a0) Stream removed, broadcasting: 3
I0308 16:06:22.264016 6 log.go:172] (0xc000b9a580) (0xc002185720) Stream removed, broadcasting: 5
Mar 8 16:06:22.264: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:06:22.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-n9j55" for this suite.
Mar 8 16:06:44.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:06:44.304: INFO: namespace: e2e-tests-pod-network-test-n9j55, resource: bindings, ignored listing per whitelist
Mar 8 16:06:44.345: INFO: namespace e2e-tests-pod-network-test-n9j55 deletion completed in 22.078984829s
• [SLOW TEST:42.440 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:06:44.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-djr2j A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-djr2j;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-djr2j A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-djr2j;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-djr2j.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-djr2j.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-djr2j.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-djr2j.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-djr2j.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-djr2j.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-djr2j.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-djr2j.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-djr2j.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 87.9.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.9.87_udp@PTR;check="$$(dig +tcp +noall +answer +search 87.9.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.9.87_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-djr2j A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-djr2j;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-djr2j A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-djr2j;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-djr2j.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-djr2j.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-djr2j.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-djr2j.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-djr2j.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-djr2j.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-djr2j.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-djr2j.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-djr2j.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 87.9.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.9.87_udp@PTR;check="$$(dig +tcp +noall +answer +search 87.9.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.9.87_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 8 16:06:58.515: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:06:58.521: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:06:58.558: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:06:58.561: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:06:58.563: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:06:58.566: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:06:58.569: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:06:58.571: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:06:58.574: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:06:58.576: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:06:58.591: INFO: Lookups using e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-djr2j jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-djr2j jessie_tcp@dns-test-service.e2e-tests-dns-djr2j jessie_udp@dns-test-service.e2e-tests-dns-djr2j.svc jessie_tcp@dns-test-service.e2e-tests-dns-djr2j.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc]
Mar 8 16:07:03.595: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:07:03.604: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:07:03.642: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:07:03.645: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:07:03.648: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:07:03.650: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:07:03.653: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:07:03.656: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:07:03.658: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:07:03.661: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:07:03.679: INFO: Lookups using e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-djr2j jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-djr2j jessie_tcp@dns-test-service.e2e-tests-dns-djr2j jessie_udp@dns-test-service.e2e-tests-dns-djr2j.svc jessie_tcp@dns-test-service.e2e-tests-dns-djr2j.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc]
Mar 8 16:07:08.596: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:07:08.603: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f)
Mar 8 16:07:08.636: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f:
the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:08.638: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:08.641: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:08.643: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:08.646: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:08.649: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:08.652: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:08.654: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods 
dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:08.667: INFO: Lookups using e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-djr2j jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-djr2j jessie_tcp@dns-test-service.e2e-tests-dns-djr2j jessie_udp@dns-test-service.e2e-tests-dns-djr2j.svc jessie_tcp@dns-test-service.e2e-tests-dns-djr2j.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc] Mar 8 16:07:13.595: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:13.601: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:13.627: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:13.629: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:13.631: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:13.633: INFO: Unable to read 
jessie_tcp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:13.635: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:13.637: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:13.639: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:13.641: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:13.652: INFO: Lookups using e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-djr2j jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-djr2j jessie_tcp@dns-test-service.e2e-tests-dns-djr2j jessie_udp@dns-test-service.e2e-tests-dns-djr2j.svc jessie_tcp@dns-test-service.e2e-tests-dns-djr2j.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc] Mar 8 16:07:18.595: INFO: Unable to read wheezy_udp@dns-test-service 
from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:18.602: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:18.637: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:18.639: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:18.642: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:18.645: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-djr2j from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:18.648: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:18.649: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods 
dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:18.651: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:18.653: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc from pod e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f: the server could not find the requested resource (get pods dns-test-ce504788-6156-11ea-b38e-0242ac11000f) Mar 8 16:07:18.666: INFO: Lookups using e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-djr2j jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-djr2j jessie_tcp@dns-test-service.e2e-tests-dns-djr2j jessie_udp@dns-test-service.e2e-tests-dns-djr2j.svc jessie_tcp@dns-test-service.e2e-tests-dns-djr2j.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-djr2j.svc] Mar 8 16:07:23.688: INFO: DNS probes using e2e-tests-dns-djr2j/dns-test-ce504788-6156-11ea-b38e-0242ac11000f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 16:07:23.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-djr2j" for this suite. 
Mar 8 16:07:30.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:07:30.176: INFO: namespace: e2e-tests-dns-djr2j, resource: bindings, ignored listing per whitelist
Mar 8 16:07:30.191: INFO: namespace e2e-tests-dns-djr2j deletion completed in 6.23186887s

• [SLOW TEST:45.846 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:07:30.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:08:30.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-pn4fg" for this suite.
Mar 8 16:08:52.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:08:52.324: INFO: namespace: e2e-tests-container-probe-pn4fg, resource: bindings, ignored listing per whitelist
Mar 8 16:08:52.379: INFO: namespace e2e-tests-container-probe-pn4fg deletion completed in 22.111946082s

• [SLOW TEST:82.187 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:08:52.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Mar 8 16:08:53.086: INFO: Pod name wrapped-volume-race-1af63e93-6157-11ea-b38e-0242ac11000f: Found 0 pods out of 5
Mar 8 16:08:58.096: INFO: Pod name wrapped-volume-race-1af63e93-6157-11ea-b38e-0242ac11000f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1af63e93-6157-11ea-b38e-0242ac11000f in namespace e2e-tests-emptydir-wrapper-6pdkp, will wait for the garbage collector to delete the pods
Mar 8 16:11:00.184: INFO: Deleting ReplicationController wrapped-volume-race-1af63e93-6157-11ea-b38e-0242ac11000f took: 7.054012ms
Mar 8 16:11:00.284: INFO: Terminating ReplicationController wrapped-volume-race-1af63e93-6157-11ea-b38e-0242ac11000f pods took: 100.214274ms
STEP: Creating RC which spawns configmap-volume pods
Mar 8 16:11:35.579: INFO: Pod name wrapped-volume-race-7bc6107f-6157-11ea-b38e-0242ac11000f: Found 0 pods out of 5
Mar 8 16:11:40.586: INFO: Pod name wrapped-volume-race-7bc6107f-6157-11ea-b38e-0242ac11000f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7bc6107f-6157-11ea-b38e-0242ac11000f in namespace e2e-tests-emptydir-wrapper-6pdkp, will wait for the garbage collector to delete the pods
Mar 8 16:14:14.669: INFO: Deleting ReplicationController wrapped-volume-race-7bc6107f-6157-11ea-b38e-0242ac11000f took: 7.702109ms
Mar 8 16:14:14.769: INFO: Terminating ReplicationController wrapped-volume-race-7bc6107f-6157-11ea-b38e-0242ac11000f pods took: 100.207397ms
STEP: Creating RC which spawns configmap-volume pods
Mar 8 16:14:49.826: INFO: Pod name wrapped-volume-race-ef93951a-6157-11ea-b38e-0242ac11000f: Found 0 pods out of 5
Mar 8 16:14:54.836: INFO: Pod name wrapped-volume-race-ef93951a-6157-11ea-b38e-0242ac11000f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ef93951a-6157-11ea-b38e-0242ac11000f in namespace e2e-tests-emptydir-wrapper-6pdkp, will wait for the garbage collector to delete the pods
Mar 8 16:16:48.927: INFO: Deleting ReplicationController wrapped-volume-race-ef93951a-6157-11ea-b38e-0242ac11000f took: 7.24796ms
Mar 8 16:16:49.028: INFO: Terminating ReplicationController wrapped-volume-race-ef93951a-6157-11ea-b38e-0242ac11000f pods took: 100.249693ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:17:28.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-6pdkp" for this suite.
Mar 8 16:17:36.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:17:36.807: INFO: namespace: e2e-tests-emptydir-wrapper-6pdkp, resource: bindings, ignored listing per whitelist
Mar 8 16:17:36.849: INFO: namespace e2e-tests-emptydir-wrapper-6pdkp deletion completed in 8.071408889s

• [SLOW TEST:524.470 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:17:36.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-5335872a-6158-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 8 16:17:37.021: INFO: Waiting up to 5m0s for pod "pod-secrets-5342fdf9-6158-11ea-b38e-0242ac11000f" in namespace "e2e-tests-secrets-8rrcb" to be "success or failure"
Mar 8 16:17:37.045: INFO: Pod "pod-secrets-5342fdf9-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.094738ms
Mar 8 16:17:39.049: INFO: Pod "pod-secrets-5342fdf9-6158-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027911382s
STEP: Saw pod success
Mar 8 16:17:39.049: INFO: Pod "pod-secrets-5342fdf9-6158-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:17:39.052: INFO: Trying to get logs from node hunter-worker pod pod-secrets-5342fdf9-6158-11ea-b38e-0242ac11000f container secret-volume-test:
STEP: delete the pod
Mar 8 16:17:39.114: INFO: Waiting for pod pod-secrets-5342fdf9-6158-11ea-b38e-0242ac11000f to disappear
Mar 8 16:17:39.139: INFO: Pod pod-secrets-5342fdf9-6158-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:17:39.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8rrcb" for this suite.
Mar 8 16:17:45.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:17:45.245: INFO: namespace: e2e-tests-secrets-8rrcb, resource: bindings, ignored listing per whitelist
Mar 8 16:17:45.280: INFO: namespace e2e-tests-secrets-8rrcb deletion completed in 6.137809519s
STEP: Destroying namespace "e2e-tests-secret-namespace-jgvvz" for this suite.
Mar 8 16:17:51.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:17:51.342: INFO: namespace: e2e-tests-secret-namespace-jgvvz, resource: bindings, ignored listing per whitelist
Mar 8 16:17:51.367: INFO: namespace e2e-tests-secret-namespace-jgvvz deletion completed in 6.086098363s

• [SLOW TEST:14.518 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:17:51.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-5bdcf6d5-6158-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 8 16:17:51.453: INFO: Waiting up to 5m0s for pod "pod-secrets-5bdd5370-6158-11ea-b38e-0242ac11000f" in namespace "e2e-tests-secrets-22ghd" to be "success or failure"
Mar 8 16:17:51.457: INFO: Pod "pod-secrets-5bdd5370-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.608086ms
Mar 8 16:17:53.462: INFO: Pod "pod-secrets-5bdd5370-6158-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0089391s
STEP: Saw pod success
Mar 8 16:17:53.462: INFO: Pod "pod-secrets-5bdd5370-6158-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:17:53.465: INFO: Trying to get logs from node hunter-worker pod pod-secrets-5bdd5370-6158-11ea-b38e-0242ac11000f container secret-env-test:
STEP: delete the pod
Mar 8 16:17:53.503: INFO: Waiting for pod pod-secrets-5bdd5370-6158-11ea-b38e-0242ac11000f to disappear
Mar 8 16:17:53.506: INFO: Pod pod-secrets-5bdd5370-6158-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:17:53.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-22ghd" for this suite.
Mar 8 16:17:59.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:17:59.575: INFO: namespace: e2e-tests-secrets-22ghd, resource: bindings, ignored listing per whitelist
Mar 8 16:17:59.599: INFO: namespace e2e-tests-secrets-22ghd deletion completed in 6.089390923s

• [SLOW TEST:8.232 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Mar 8 16:17:59.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-projected-mn24 STEP: Creating a pod to test atomic-volume-subpath Mar 8 16:17:59.711: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-mn24" in namespace "e2e-tests-subpath-7j2nd" to be "success or failure" Mar 8 16:17:59.732: INFO: Pod "pod-subpath-test-projected-mn24": Phase="Pending", Reason="", readiness=false. Elapsed: 20.301272ms Mar 8 16:18:01.736: INFO: Pod "pod-subpath-test-projected-mn24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024508011s Mar 8 16:18:03.740: INFO: Pod "pod-subpath-test-projected-mn24": Phase="Running", Reason="", readiness=false. Elapsed: 4.028703182s Mar 8 16:18:05.745: INFO: Pod "pod-subpath-test-projected-mn24": Phase="Running", Reason="", readiness=false. Elapsed: 6.033456889s Mar 8 16:18:07.749: INFO: Pod "pod-subpath-test-projected-mn24": Phase="Running", Reason="", readiness=false. Elapsed: 8.038063797s Mar 8 16:18:09.754: INFO: Pod "pod-subpath-test-projected-mn24": Phase="Running", Reason="", readiness=false. Elapsed: 10.042485365s Mar 8 16:18:11.758: INFO: Pod "pod-subpath-test-projected-mn24": Phase="Running", Reason="", readiness=false. Elapsed: 12.047020553s Mar 8 16:18:13.763: INFO: Pod "pod-subpath-test-projected-mn24": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.051542597s Mar 8 16:18:15.767: INFO: Pod "pod-subpath-test-projected-mn24": Phase="Running", Reason="", readiness=false. Elapsed: 16.055729377s Mar 8 16:18:17.772: INFO: Pod "pod-subpath-test-projected-mn24": Phase="Running", Reason="", readiness=false. Elapsed: 18.060411416s Mar 8 16:18:19.776: INFO: Pod "pod-subpath-test-projected-mn24": Phase="Running", Reason="", readiness=false. Elapsed: 20.06499214s Mar 8 16:18:21.780: INFO: Pod "pod-subpath-test-projected-mn24": Phase="Running", Reason="", readiness=false. Elapsed: 22.069052871s Mar 8 16:18:23.784: INFO: Pod "pod-subpath-test-projected-mn24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.07316332s STEP: Saw pod success Mar 8 16:18:23.784: INFO: Pod "pod-subpath-test-projected-mn24" satisfied condition "success or failure" Mar 8 16:18:23.787: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-mn24 container test-container-subpath-projected-mn24: STEP: delete the pod Mar 8 16:18:23.836: INFO: Waiting for pod pod-subpath-test-projected-mn24 to disappear Mar 8 16:18:23.842: INFO: Pod pod-subpath-test-projected-mn24 no longer exists STEP: Deleting pod pod-subpath-test-projected-mn24 Mar 8 16:18:23.842: INFO: Deleting pod "pod-subpath-test-projected-mn24" in namespace "e2e-tests-subpath-7j2nd" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Mar 8 16:18:23.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-7j2nd" for this suite. 
Mar 8 16:18:29.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:18:29.951: INFO: namespace: e2e-tests-subpath-7j2nd, resource: bindings, ignored listing per whitelist
Mar 8 16:18:29.982: INFO: namespace e2e-tests-subpath-7j2nd deletion completed in 6.13330857s
• [SLOW TEST:30.383 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:18:29.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 8 16:18:30.113: INFO: Waiting up to 5m0s for pod "downwardapi-volume-72e79af7-6158-11ea-b38e-0242ac11000f" in namespace "e2e-tests-downward-api-hhsb6" to be "success or failure"
Mar 8 16:18:30.117: INFO: Pod "downwardapi-volume-72e79af7-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204035ms
Mar 8 16:18:32.120: INFO: Pod "downwardapi-volume-72e79af7-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007201676s
Mar 8 16:18:34.124: INFO: Pod "downwardapi-volume-72e79af7-6158-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011278823s
STEP: Saw pod success
Mar 8 16:18:34.124: INFO: Pod "downwardapi-volume-72e79af7-6158-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:18:34.127: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-72e79af7-6158-11ea-b38e-0242ac11000f container client-container:
STEP: delete the pod
Mar 8 16:18:34.167: INFO: Waiting for pod downwardapi-volume-72e79af7-6158-11ea-b38e-0242ac11000f to disappear
Mar 8 16:18:34.189: INFO: Pod downwardapi-volume-72e79af7-6158-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:18:34.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hhsb6" for this suite.
Mar 8 16:18:40.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:18:40.267: INFO: namespace: e2e-tests-downward-api-hhsb6, resource: bindings, ignored listing per whitelist
Mar 8 16:18:40.297: INFO: namespace e2e-tests-downward-api-hhsb6 deletion completed in 6.10468919s
• [SLOW TEST:10.314 seconds]
[sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:18:40.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Mar 8 16:18:40.375: INFO: Waiting up to 5m0s for pod "var-expansion-7906173e-6158-11ea-b38e-0242ac11000f" in namespace "e2e-tests-var-expansion-knvcf" to be "success or failure"
Mar 8 16:18:40.379: INFO: Pod "var-expansion-7906173e-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118572ms
Mar 8 16:18:42.384: INFO: Pod "var-expansion-7906173e-6158-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008244331s
STEP: Saw pod success
Mar 8 16:18:42.384: INFO: Pod "var-expansion-7906173e-6158-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:18:42.386: INFO: Trying to get logs from node hunter-worker pod var-expansion-7906173e-6158-11ea-b38e-0242ac11000f container dapi-container:
STEP: delete the pod
Mar 8 16:18:42.405: INFO: Waiting for pod var-expansion-7906173e-6158-11ea-b38e-0242ac11000f to disappear
Mar 8 16:18:42.409: INFO: Pod var-expansion-7906173e-6158-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:18:42.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-knvcf" for this suite.
Mar 8 16:18:48.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:18:48.564: INFO: namespace: e2e-tests-var-expansion-knvcf, resource: bindings, ignored listing per whitelist
Mar 8 16:18:48.576: INFO: namespace e2e-tests-var-expansion-knvcf deletion completed in 6.163768992s
• [SLOW TEST:8.279 seconds]
[k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:18:48.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-7dfa583f-6158-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 8 16:18:48.693: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7dfb4788-6158-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-x4qfv" to be "success or failure"
Mar 8 16:18:48.697: INFO: Pod "pod-projected-configmaps-7dfb4788-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.65374ms
Mar 8 16:18:50.701: INFO: Pod "pod-projected-configmaps-7dfb4788-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007180042s
Mar 8 16:18:52.704: INFO: Pod "pod-projected-configmaps-7dfb4788-6158-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010987724s
STEP: Saw pod success
Mar 8 16:18:52.705: INFO: Pod "pod-projected-configmaps-7dfb4788-6158-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:18:52.707: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-7dfb4788-6158-11ea-b38e-0242ac11000f container projected-configmap-volume-test:
STEP: delete the pod
Mar 8 16:18:52.731: INFO: Waiting for pod pod-projected-configmaps-7dfb4788-6158-11ea-b38e-0242ac11000f to disappear
Mar 8 16:18:52.769: INFO: Pod pod-projected-configmaps-7dfb4788-6158-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:18:52.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-x4qfv" for this suite.
Mar 8 16:18:58.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:18:58.833: INFO: namespace: e2e-tests-projected-x4qfv, resource: bindings, ignored listing per whitelist
Mar 8 16:18:58.847: INFO: namespace e2e-tests-projected-x4qfv deletion completed in 6.073760803s
• [SLOW TEST:10.271 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:18:58.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Mar 8 16:18:58.959: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 8 16:18:59.035: INFO: Waiting for terminating namespaces to be deleted...
Mar 8 16:18:59.037: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Mar 8 16:18:59.041: INFO: kindnet-jjqmp from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded)
Mar 8 16:18:59.041: INFO: Container kindnet-cni ready: true, restart count 0
Mar 8 16:18:59.041: INFO: kube-proxy-h66sh from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded)
Mar 8 16:18:59.041: INFO: Container kube-proxy ready: true, restart count 0
Mar 8 16:18:59.041: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Mar 8 16:18:59.044: INFO: kindnet-nwqfj from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded)
Mar 8 16:18:59.044: INFO: Container kindnet-cni ready: true, restart count 0
Mar 8 16:18:59.044: INFO: kube-proxy-chv9d from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded)
Mar 8 16:18:59.044: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fa608401c017c8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:19:00.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-mjzz4" for this suite.
Mar 8 16:19:06.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:19:06.103: INFO: namespace: e2e-tests-sched-pred-mjzz4, resource: bindings, ignored listing per whitelist
Mar 8 16:19:06.153: INFO: namespace e2e-tests-sched-pred-mjzz4 deletion completed in 6.090491386s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:7.306 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:19:06.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 8 16:19:06.364: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8882b36a-6158-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-cj4hh" to be "success or failure"
Mar 8 16:19:06.380: INFO: Pod "downwardapi-volume-8882b36a-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.79987ms
Mar 8 16:19:08.384: INFO: Pod "downwardapi-volume-8882b36a-6158-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019294595s
STEP: Saw pod success
Mar 8 16:19:08.384: INFO: Pod "downwardapi-volume-8882b36a-6158-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:19:08.386: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8882b36a-6158-11ea-b38e-0242ac11000f container client-container:
STEP: delete the pod
Mar 8 16:19:08.421: INFO: Waiting for pod downwardapi-volume-8882b36a-6158-11ea-b38e-0242ac11000f to disappear
Mar 8 16:19:08.429: INFO: Pod downwardapi-volume-8882b36a-6158-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:19:08.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cj4hh" for this suite.
Mar 8 16:19:14.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:19:14.536: INFO: namespace: e2e-tests-projected-cj4hh, resource: bindings, ignored listing per whitelist
Mar 8 16:19:14.546: INFO: namespace e2e-tests-projected-cj4hh deletion completed in 6.113588653s
• [SLOW TEST:8.393 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:19:14.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Mar 8 16:19:14.754: INFO: Waiting up to 5m0s for pod "var-expansion-8d84448b-6158-11ea-b38e-0242ac11000f" in namespace "e2e-tests-var-expansion-g74m9" to be "success or failure"
Mar 8 16:19:14.769: INFO: Pod "var-expansion-8d84448b-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.604019ms
Mar 8 16:19:16.774: INFO: Pod "var-expansion-8d84448b-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019716065s
Mar 8 16:19:18.777: INFO: Pod "var-expansion-8d84448b-6158-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023297314s
STEP: Saw pod success
Mar 8 16:19:18.777: INFO: Pod "var-expansion-8d84448b-6158-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:19:18.780: INFO: Trying to get logs from node hunter-worker pod var-expansion-8d84448b-6158-11ea-b38e-0242ac11000f container dapi-container:
STEP: delete the pod
Mar 8 16:19:18.875: INFO: Waiting for pod var-expansion-8d84448b-6158-11ea-b38e-0242ac11000f to disappear
Mar 8 16:19:18.879: INFO: Pod var-expansion-8d84448b-6158-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:19:18.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-g74m9" for this suite.
Mar 8 16:19:24.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:19:24.930: INFO: namespace: e2e-tests-var-expansion-g74m9, resource: bindings, ignored listing per whitelist
Mar 8 16:19:24.985: INFO: namespace e2e-tests-var-expansion-g74m9 deletion completed in 6.103232083s
• [SLOW TEST:10.439 seconds]
[k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:19:24.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-93aa412f-6158-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 8 16:19:25.114: INFO: Waiting up to 5m0s for pod "pod-configmaps-93ac2e1d-6158-11ea-b38e-0242ac11000f" in namespace "e2e-tests-configmap-frpsf" to be "success or failure"
Mar 8 16:19:25.148: INFO: Pod "pod-configmaps-93ac2e1d-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 33.858074ms
Mar 8 16:19:27.151: INFO: Pod "pod-configmaps-93ac2e1d-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036950582s
Mar 8 16:19:29.155: INFO: Pod "pod-configmaps-93ac2e1d-6158-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040837905s
STEP: Saw pod success
Mar 8 16:19:29.155: INFO: Pod "pod-configmaps-93ac2e1d-6158-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:19:29.157: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-93ac2e1d-6158-11ea-b38e-0242ac11000f container configmap-volume-test:
STEP: delete the pod
Mar 8 16:19:29.247: INFO: Waiting for pod pod-configmaps-93ac2e1d-6158-11ea-b38e-0242ac11000f to disappear
Mar 8 16:19:29.275: INFO: Pod pod-configmaps-93ac2e1d-6158-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:19:29.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-frpsf" for this suite.
Mar 8 16:19:35.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:19:35.308: INFO: namespace: e2e-tests-configmap-frpsf, resource: bindings, ignored listing per whitelist
Mar 8 16:19:35.363: INFO: namespace e2e-tests-configmap-frpsf deletion completed in 6.084435654s
• [SLOW TEST:10.377 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:19:35.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 8 16:19:35.549: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"99e1149f-6158-11ea-9978-0242ac11000d", Controller:(*bool)(0xc002a47dce), BlockOwnerDeletion:(*bool)(0xc002a47dcf)}}
Mar 8 16:19:35.656: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"99dd19e9-6158-11ea-9978-0242ac11000d", Controller:(*bool)(0xc00270de0e), BlockOwnerDeletion:(*bool)(0xc00270de0f)}}
Mar 8 16:19:35.682: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"99dd7094-6158-11ea-9978-0242ac11000d", Controller:(*bool)(0xc002bc9f96), BlockOwnerDeletion:(*bool)(0xc002bc9f97)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:19:40.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-rhz9v" for this suite.
Mar 8 16:19:46.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:19:46.748: INFO: namespace: e2e-tests-gc-rhz9v, resource: bindings, ignored listing per whitelist
Mar 8 16:19:46.797: INFO: namespace e2e-tests-gc-rhz9v deletion completed in 6.081402649s
• [SLOW TEST:11.433 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:19:46.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-a0b185e6-6158-11ea-b38e-0242ac11000f
STEP: Creating configMap with name cm-test-opt-upd-a0b18643-6158-11ea-b38e-0242ac11000f
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a0b185e6-6158-11ea-b38e-0242ac11000f
STEP: Updating configmap cm-test-opt-upd-a0b18643-6158-11ea-b38e-0242ac11000f
STEP: Creating configMap with name cm-test-opt-create-a0b18662-6158-11ea-b38e-0242ac11000f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:19:55.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-bdv27" for this suite.
Mar 8 16:20:17.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:20:17.044: INFO: namespace: e2e-tests-configmap-bdv27, resource: bindings, ignored listing per whitelist
Mar 8 16:20:17.111: INFO: namespace e2e-tests-configmap-bdv27 deletion completed in 22.094930447s
• [SLOW TEST:30.314 seconds]
[sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:20:17.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 8 16:20:17.210: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2bda5a0-6158-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-r8265" to be "success or failure"
Mar 8 16:20:17.227: INFO: Pod "downwardapi-volume-b2bda5a0-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.880022ms
Mar 8 16:20:19.232: INFO: Pod "downwardapi-volume-b2bda5a0-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021508897s
Mar 8 16:20:21.236: INFO: Pod "downwardapi-volume-b2bda5a0-6158-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025421647s
STEP: Saw pod success
Mar 8 16:20:21.236: INFO: Pod "downwardapi-volume-b2bda5a0-6158-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:20:21.238: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b2bda5a0-6158-11ea-b38e-0242ac11000f container client-container:
STEP: delete the pod
Mar 8 16:20:21.260: INFO: Waiting for pod downwardapi-volume-b2bda5a0-6158-11ea-b38e-0242ac11000f to disappear
Mar 8 16:20:21.264: INFO: Pod downwardapi-volume-b2bda5a0-6158-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:20:21.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r8265" for this suite.
Mar 8 16:20:27.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:20:27.323: INFO: namespace: e2e-tests-projected-r8265, resource: bindings, ignored listing per whitelist
Mar 8 16:20:27.352: INFO: namespace e2e-tests-projected-r8265 deletion completed in 6.084194098s
• [SLOW TEST:10.240 seconds]
[sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Subpath
  Atomic writer volumes
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:20:27.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-mc7x
STEP: Creating a pod to test atomic-volume-subpath
Mar 8 16:20:27.451: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mc7x" in namespace "e2e-tests-subpath-rbxmm" to be "success or failure"
Mar 8 16:20:27.485: INFO: Pod "pod-subpath-test-secret-mc7x": Phase="Pending", Reason="", readiness=false. Elapsed: 33.195001ms
Mar 8 16:20:29.488: INFO: Pod "pod-subpath-test-secret-mc7x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036512018s
Mar 8 16:20:31.491: INFO: Pod "pod-subpath-test-secret-mc7x": Phase="Running", Reason="", readiness=false. Elapsed: 4.040034341s
Mar 8 16:20:33.497: INFO: Pod "pod-subpath-test-secret-mc7x": Phase="Running", Reason="", readiness=false. Elapsed: 6.045634825s
Mar 8 16:20:35.527: INFO: Pod "pod-subpath-test-secret-mc7x": Phase="Running", Reason="", readiness=false. Elapsed: 8.075901744s
Mar 8 16:20:37.530: INFO: Pod "pod-subpath-test-secret-mc7x": Phase="Running", Reason="", readiness=false. Elapsed: 10.078424977s
Mar 8 16:20:39.539: INFO: Pod "pod-subpath-test-secret-mc7x": Phase="Running", Reason="", readiness=false. Elapsed: 12.087414382s
Mar 8 16:20:41.542: INFO: Pod "pod-subpath-test-secret-mc7x": Phase="Running", Reason="", readiness=false. Elapsed: 14.090760347s
Mar 8 16:20:43.545: INFO: Pod "pod-subpath-test-secret-mc7x": Phase="Running", Reason="", readiness=false. Elapsed: 16.093703362s
Mar 8 16:20:45.549: INFO: Pod "pod-subpath-test-secret-mc7x": Phase="Running", Reason="", readiness=false. Elapsed: 18.097725583s
Mar 8 16:20:47.552: INFO: Pod "pod-subpath-test-secret-mc7x": Phase="Running", Reason="", readiness=false. Elapsed: 20.101108221s
Mar 8 16:20:49.588: INFO: Pod "pod-subpath-test-secret-mc7x": Phase="Running", Reason="", readiness=false. Elapsed: 22.136945743s
Mar 8 16:20:51.592: INFO: Pod "pod-subpath-test-secret-mc7x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.141049383s
STEP: Saw pod success
Mar 8 16:20:51.592: INFO: Pod "pod-subpath-test-secret-mc7x" satisfied condition "success or failure"
Mar 8 16:20:51.596: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-mc7x container test-container-subpath-secret-mc7x:
STEP: delete the pod
Mar 8 16:20:51.680: INFO: Waiting for pod pod-subpath-test-secret-mc7x to disappear
Mar 8 16:20:51.712: INFO: Pod pod-subpath-test-secret-mc7x no longer exists
STEP: Deleting pod pod-subpath-test-secret-mc7x
Mar 8 16:20:51.712: INFO: Deleting pod "pod-subpath-test-secret-mc7x" in namespace "e2e-tests-subpath-rbxmm"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:20:51.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-rbxmm" for this suite.
Mar 8 16:20:57.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:20:57.779: INFO: namespace: e2e-tests-subpath-rbxmm, resource: bindings, ignored listing per whitelist
Mar 8 16:20:57.824: INFO: namespace e2e-tests-subpath-rbxmm deletion completed in 6.10728148s
• [SLOW TEST:30.472 seconds]
[sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook
  when create a pod with lifecycle hook
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:20:57.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 8 16:21:02.061: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 8 16:21:02.086: INFO: Pod pod-with-poststart-http-hook still exists
Mar 8 16:21:04.086: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 8 16:21:04.091: INFO: Pod pod-with-poststart-http-hook still exists
Mar 8 16:21:06.086: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 8 16:21:06.090: INFO: Pod pod-with-poststart-http-hook still exists
Mar 8 16:21:08.086: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 8 16:21:08.090: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:21:08.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-v2hxb" for this suite.
Mar 8 16:21:30.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:21:30.209: INFO: namespace: e2e-tests-container-lifecycle-hook-v2hxb, resource: bindings, ignored listing per whitelist
Mar 8 16:21:30.211: INFO: namespace e2e-tests-container-lifecycle-hook-v2hxb deletion completed in 22.117292058s
• [SLOW TEST:32.387 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:21:30.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-de4fcbca-6158-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 8 16:21:30.324: INFO: Waiting up to 5m0s for pod "pod-secrets-de515fe5-6158-11ea-b38e-0242ac11000f" in namespace "e2e-tests-secrets-tsmj9" to be "success or failure"
Mar 8 16:21:30.390: INFO: Pod "pod-secrets-de515fe5-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 65.650476ms
Mar 8 16:21:32.394: INFO: Pod "pod-secrets-de515fe5-6158-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.069646503s
STEP: Saw pod success
Mar 8 16:21:32.394: INFO: Pod "pod-secrets-de515fe5-6158-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:21:32.396: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-de515fe5-6158-11ea-b38e-0242ac11000f container secret-volume-test: 
STEP: delete the pod
Mar 8 16:21:32.429: INFO: Waiting for pod pod-secrets-de515fe5-6158-11ea-b38e-0242ac11000f to disappear
Mar 8 16:21:32.434: INFO: Pod pod-secrets-de515fe5-6158-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:21:32.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-tsmj9" for this suite.
Mar 8 16:21:38.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:21:38.512: INFO: namespace: e2e-tests-secrets-tsmj9, resource: bindings, ignored listing per whitelist
Mar 8 16:21:38.517: INFO: namespace e2e-tests-secrets-tsmj9 deletion completed in 6.078606503s
• [SLOW TEST:8.305 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:21:38.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-e34f2a69-6158-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume secrets
Mar 8 16:21:38.693: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e34f9992-6158-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-p4dsl" to be "success or failure"
Mar 8 16:21:38.709: INFO: Pod "pod-projected-secrets-e34f9992-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.32257ms
Mar 8 16:21:40.713: INFO: Pod "pod-projected-secrets-e34f9992-6158-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020283739s
STEP: Saw pod success
Mar 8 16:21:40.713: INFO: Pod "pod-projected-secrets-e34f9992-6158-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:21:40.715: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-e34f9992-6158-11ea-b38e-0242ac11000f container projected-secret-volume-test: 
STEP: delete the pod
Mar 8 16:21:40.728: INFO: Waiting for pod pod-projected-secrets-e34f9992-6158-11ea-b38e-0242ac11000f to disappear
Mar 8 16:21:40.761: INFO: Pod pod-projected-secrets-e34f9992-6158-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:21:40.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p4dsl" for this suite.
Mar 8 16:21:46.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:21:46.860: INFO: namespace: e2e-tests-projected-p4dsl, resource: bindings, ignored listing per whitelist
Mar 8 16:21:46.893: INFO: namespace e2e-tests-projected-p4dsl deletion completed in 6.129090061s
• [SLOW TEST:8.376 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:21:46.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Mar 8 16:21:47.031: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 8 16:21:47.050: INFO: Waiting for terminating namespaces to be deleted...
Mar 8 16:21:47.051: INFO: Logging pods the kubelet thinks is on node hunter-worker before test
Mar 8 16:21:47.054: INFO: kindnet-jjqmp from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded)
Mar 8 16:21:47.054: INFO: Container kindnet-cni ready: true, restart count 0
Mar 8 16:21:47.054: INFO: kube-proxy-h66sh from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded)
Mar 8 16:21:47.054: INFO: Container kube-proxy ready: true, restart count 0
Mar 8 16:21:47.054: INFO: Logging pods the kubelet thinks is on node hunter-worker2 before test
Mar 8 16:21:47.056: INFO: kube-proxy-chv9d from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded)
Mar 8 16:21:47.057: INFO: Container kube-proxy ready: true, restart count 0
Mar 8 16:21:47.057: INFO: kindnet-nwqfj from kube-system started at 2020-03-08 14:42:38 +0000 UTC (1 container statuses recorded)
Mar 8 16:21:47.057: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
Mar 8 16:21:47.162: INFO: Pod kindnet-jjqmp requesting resource cpu=100m on Node hunter-worker
Mar 8 16:21:47.162: INFO: Pod kindnet-nwqfj requesting resource cpu=100m on Node hunter-worker2
Mar 8 16:21:47.162: INFO: Pod kube-proxy-chv9d requesting resource cpu=0m on Node hunter-worker2
Mar 8 16:21:47.162: INFO: Pod kube-proxy-h66sh requesting resource cpu=0m on Node hunter-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-e85caf23-6158-11ea-b38e-0242ac11000f.15fa60ab2720101b], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-95jqg/filler-pod-e85caf23-6158-11ea-b38e-0242ac11000f to hunter-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e85caf23-6158-11ea-b38e-0242ac11000f.15fa60ab59a6621c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e85caf23-6158-11ea-b38e-0242ac11000f.15fa60ab6c6c3d88], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e85caf23-6158-11ea-b38e-0242ac11000f.15fa60ab7a21e570], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e85d631f-6158-11ea-b38e-0242ac11000f.15fa60ab291b6785], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-95jqg/filler-pod-e85d631f-6158-11ea-b38e-0242ac11000f to hunter-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e85d631f-6158-11ea-b38e-0242ac11000f.15fa60ab5f6f383d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e85d631f-6158-11ea-b38e-0242ac11000f.15fa60ab7175539f], Reason = [Created], Message = [Created container]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e85d631f-6158-11ea-b38e-0242ac11000f.15fa60ab7dc25b7d], Reason = [Started], Message = [Started container]
STEP: Considering event: Type = [Warning], Name = [additional-pod.15fa60ac18a1d25c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:21:52.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-95jqg" for this suite.
Mar 8 16:21:58.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:21:58.397: INFO: namespace: e2e-tests-sched-pred-95jqg, resource: bindings, ignored listing per whitelist
Mar 8 16:21:58.458: INFO: namespace e2e-tests-sched-pred-95jqg deletion completed in 6.097429402s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
• [SLOW TEST:11.565 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:21:58.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Mar 8 16:21:58.567: INFO: Waiting up to 5m0s for pod "client-containers-ef27c224-6158-11ea-b38e-0242ac11000f" in namespace "e2e-tests-containers-nq2tm" to be "success or failure"
Mar 8 16:21:58.571: INFO: Pod "client-containers-ef27c224-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021802ms
Mar 8 16:22:00.575: INFO: Pod "client-containers-ef27c224-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007831488s
Mar 8 16:22:02.580: INFO: Pod "client-containers-ef27c224-6158-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012066684s
STEP: Saw pod success
Mar 8 16:22:02.580: INFO: Pod "client-containers-ef27c224-6158-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:22:02.583: INFO: Trying to get logs from node hunter-worker2 pod client-containers-ef27c224-6158-11ea-b38e-0242ac11000f container test-container: 
STEP: delete the pod
Mar 8 16:22:02.604: INFO: Waiting for pod client-containers-ef27c224-6158-11ea-b38e-0242ac11000f to disappear
Mar 8 16:22:02.608: INFO: Pod client-containers-ef27c224-6158-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:22:02.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-nq2tm" for this suite.
Mar 8 16:22:08.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:22:08.638: INFO: namespace: e2e-tests-containers-nq2tm, resource: bindings, ignored listing per whitelist
Mar 8 16:22:08.697: INFO: namespace e2e-tests-containers-nq2tm deletion completed in 6.085357636s
• [SLOW TEST:10.239 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:22:08.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-f53e6907-6158-11ea-b38e-0242ac11000f
STEP: Creating a pod to test consume configMaps
Mar 8 16:22:08.787: INFO: Waiting up to 5m0s for pod "pod-configmaps-f53f2953-6158-11ea-b38e-0242ac11000f" in namespace "e2e-tests-configmap-8trn5" to be "success or failure"
Mar 8 16:22:08.797: INFO: Pod "pod-configmaps-f53f2953-6158-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.622876ms
Mar 8 16:22:10.800: INFO: Pod "pod-configmaps-f53f2953-6158-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013281338s
STEP: Saw pod success
Mar 8 16:22:10.800: INFO: Pod "pod-configmaps-f53f2953-6158-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:22:10.804: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-f53f2953-6158-11ea-b38e-0242ac11000f container configmap-volume-test: 
STEP: delete the pod
Mar 8 16:22:10.839: INFO: Waiting for pod pod-configmaps-f53f2953-6158-11ea-b38e-0242ac11000f to disappear
Mar 8 16:22:10.845: INFO: Pod pod-configmaps-f53f2953-6158-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:22:10.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8trn5" for this suite.
Mar 8 16:22:16.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:22:16.909: INFO: namespace: e2e-tests-configmap-8trn5, resource: bindings, ignored listing per whitelist
Mar 8 16:22:16.943: INFO: namespace e2e-tests-configmap-8trn5 deletion completed in 6.095044483s
• [SLOW TEST:8.246 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:22:16.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:22:21.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-v8tkg" for this suite.
Mar 8 16:22:27.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:22:27.198: INFO: namespace: e2e-tests-emptydir-wrapper-v8tkg, resource: bindings, ignored listing per whitelist
Mar 8 16:22:27.240: INFO: namespace e2e-tests-emptydir-wrapper-v8tkg deletion completed in 6.085028432s
• [SLOW TEST:10.296 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:22:27.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Mar 8 16:22:27.372: INFO: Waiting up to 5m0s for pod "downwardapi-volume-005387e0-6159-11ea-b38e-0242ac11000f" in namespace "e2e-tests-projected-r79fq" to be "success or failure"
Mar 8 16:22:27.375: INFO: Pod "downwardapi-volume-005387e0-6159-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.85717ms
Mar 8 16:22:29.378: INFO: Pod "downwardapi-volume-005387e0-6159-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006153605s
STEP: Saw pod success
Mar 8 16:22:29.378: INFO: Pod "downwardapi-volume-005387e0-6159-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:22:29.380: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-005387e0-6159-11ea-b38e-0242ac11000f container client-container: 
STEP: delete the pod
Mar 8 16:22:29.425: INFO: Waiting for pod downwardapi-volume-005387e0-6159-11ea-b38e-0242ac11000f to disappear
Mar 8 16:22:29.429: INFO: Pod downwardapi-volume-005387e0-6159-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:22:29.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r79fq" for this suite.
Mar 8 16:22:35.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:22:35.529: INFO: namespace: e2e-tests-projected-r79fq, resource: bindings, ignored listing per whitelist
Mar 8 16:22:35.538: INFO: namespace e2e-tests-projected-r79fq deletion completed in 6.107118216s
• [SLOW TEST:8.299 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:22:35.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:22:35.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gxrdp" for this suite.
Mar 8 16:22:57.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:22:57.762: INFO: namespace: e2e-tests-pods-gxrdp, resource: bindings, ignored listing per whitelist
Mar 8 16:22:57.797: INFO: namespace e2e-tests-pods-gxrdp deletion completed in 22.112582347s
• [SLOW TEST:22.258 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:22:57.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Mar 8 16:22:57.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Mar 8 16:22:58.058: INFO: stderr: ""
Mar 8 16:22:58.058: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:22:58.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wzbl6" for this suite.
Mar 8 16:23:04.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:23:04.107: INFO: namespace: e2e-tests-kubectl-wzbl6, resource: bindings, ignored listing per whitelist
Mar 8 16:23:04.155: INFO: namespace e2e-tests-kubectl-wzbl6 deletion completed in 6.091990616s
• [SLOW TEST:6.358 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl api-versions
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:23:04.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Mar 8 16:23:06.815: INFO: Successfully updated pod "annotationupdate16545bb6-6159-11ea-b38e-0242ac11000f"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:23:08.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-s9fkc" for this suite.
Mar 8 16:23:30.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:23:30.898: INFO: namespace: e2e-tests-downward-api-s9fkc, resource: bindings, ignored listing per whitelist
Mar 8 16:23:30.948: INFO: namespace e2e-tests-downward-api-s9fkc deletion completed in 22.108184024s
• [SLOW TEST:26.793 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:23:30.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Mar 8 16:23:33.119: INFO: Waiting up to 5m0s for pod "client-envvars-2780d1db-6159-11ea-b38e-0242ac11000f" in namespace "e2e-tests-pods-9q5x9" to be "success or failure"
Mar 8 16:23:33.188: INFO: Pod "client-envvars-2780d1db-6159-11ea-b38e-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 69.549653ms
Mar 8 16:23:35.193: INFO: Pod "client-envvars-2780d1db-6159-11ea-b38e-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.074169836s
STEP: Saw pod success
Mar 8 16:23:35.193: INFO: Pod "client-envvars-2780d1db-6159-11ea-b38e-0242ac11000f" satisfied condition "success or failure"
Mar 8 16:23:35.196: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-2780d1db-6159-11ea-b38e-0242ac11000f container env3cont:
STEP: delete the pod
Mar 8 16:23:35.220: INFO: Waiting for pod client-envvars-2780d1db-6159-11ea-b38e-0242ac11000f to disappear
Mar 8 16:23:35.225: INFO: Pod client-envvars-2780d1db-6159-11ea-b38e-0242ac11000f no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:23:35.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-9q5x9" for this suite.
Mar 8 16:24:13.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:24:13.407: INFO: namespace: e2e-tests-pods-9q5x9, resource: bindings, ignored listing per whitelist
Mar 8 16:24:13.441: INFO: namespace e2e-tests-pods-9q5x9 deletion completed in 38.212995111s
• [SLOW TEST:42.493 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:24:13.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:24:13.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-49rnn" for this suite.
Mar 8 16:24:19.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:24:19.749: INFO: namespace: e2e-tests-kubelet-test-49rnn, resource: bindings, ignored listing per whitelist
Mar 8 16:24:19.775: INFO: namespace e2e-tests-kubelet-test-49rnn deletion completed in 6.123554818s
• [SLOW TEST:6.334 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:24:19.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Mar 8 16:24:21.891: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""},
ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-4360a5d0-6159-11ea-b38e-0242ac11000f", GenerateName:"", Namespace:"e2e-tests-pods-bbftv", SelfLink:"/api/v1/namespaces/e2e-tests-pods-bbftv/pods/pod-submit-remove-4360a5d0-6159-11ea-b38e-0242ac11000f", UID:"43624cf4-6159-11ea-9978-0242ac11000d", ResourceVersion:"22939", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719281459, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"861253087"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-256nf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0013e6240), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), 
ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-256nf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002c522a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001f66b40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c52300)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c52320)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", 
Priority:(*int32)(0xc002c52328), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002c5232c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719281459, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719281461, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719281461, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719281459, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.1.197", StartTime:(*v1.Time)(0xc001ebfd60), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc001ebfd80), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", 
ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://d95814c9135f7b6fd31c9bd6b67b3dbbed0c3c7d502e8444e5e879fde04b04ab"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:24:27.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bbftv" for this suite.
Mar 8 16:24:33.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:24:34.024: INFO: namespace: e2e-tests-pods-bbftv, resource: bindings, ignored listing per whitelist
Mar 8 16:24:34.033: INFO: namespace e2e-tests-pods-bbftv deletion completed in 6.088121744s
• [SLOW TEST:14.258 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Mar 8 16:24:34.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Mar 8 16:24:34.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-w6mwd'
Mar 8 16:24:35.840: INFO: stderr: ""
Mar 8 16:24:35.840: INFO: stdout: "pod/pause created\n"
Mar 8 16:24:35.840: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Mar 8 16:24:35.840: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-w6mwd" to be "running and ready"
Mar 8 16:24:35.856: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 15.4106ms
Mar 8 16:24:37.860: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.019531602s
Mar 8 16:24:37.860: INFO: Pod "pause" satisfied condition "running and ready"
Mar 8 16:24:37.860: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Mar 8 16:24:37.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-w6mwd'
Mar 8 16:24:37.985: INFO: stderr: ""
Mar 8 16:24:37.985: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Mar 8 16:24:37.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-w6mwd'
Mar 8 16:24:38.085: INFO: stderr: ""
Mar 8 16:24:38.085: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s testing-label-value\n"
STEP: removing the label testing-label of a pod
Mar 8 16:24:38.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-w6mwd'
Mar 8 16:24:38.174: INFO: stderr: ""
Mar 8 16:24:38.174: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Mar 8 16:24:38.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-w6mwd'
Mar 8 16:24:38.251: INFO: stderr: ""
Mar 8 16:24:38.251: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Mar 8 16:24:38.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-w6mwd'
Mar 8 16:24:38.341: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 8 16:24:38.341: INFO: stdout: "pod \"pause\" force deleted\n"
Mar 8 16:24:38.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-w6mwd'
Mar 8 16:24:38.433: INFO: stderr: "No resources found.\n"
Mar 8 16:24:38.433: INFO: stdout: ""
Mar 8 16:24:38.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-w6mwd -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 8 16:24:38.507: INFO: stderr: ""
Mar 8 16:24:38.507: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Mar 8 16:24:38.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w6mwd" for this suite.
Mar 8 16:24:44.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 8 16:24:44.529: INFO: namespace: e2e-tests-kubectl-w6mwd, resource: bindings, ignored listing per whitelist
Mar 8 16:24:44.582: INFO: namespace e2e-tests-kubectl-w6mwd deletion completed in 6.073287525s
• [SLOW TEST:10.549 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
Mar 8 16:24:44.582: INFO: Running AfterSuite actions on all nodes
Mar 8 16:24:44.582: INFO: Running AfterSuite actions on node 1
Mar 8 16:24:44.582: INFO: Skipping dumping logs from cluster
Ran 200 of 2164 Specs in 5967.484 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS