I0710 10:49:33.314573 6 e2e.go:224] Starting e2e run "09d24322-c29b-11ea-a406-0242ac11000f" on Ginkgo node 1 Running Suite: Kubernetes e2e suite =================================== Random Seed: 1594378172 - Will randomize all specs Will run 201 of 2164 specs Jul 10 10:49:33.481: INFO: >>> kubeConfig: /root/.kube/config Jul 10 10:49:33.485: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jul 10 10:49:33.498: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jul 10 10:49:33.543: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jul 10 10:49:33.543: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jul 10 10:49:33.543: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jul 10 10:49:33.683: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Jul 10 10:49:33.683: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jul 10 10:49:33.683: INFO: e2e test version: v1.13.12 Jul 10 10:49:33.684: INFO: kube-apiserver version: v1.13.12 [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 10:49:33.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api Jul 10 10:49:34.678: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 10 10:49:34.684: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ae623c6-c29b-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-sggq9" to be "success or failure" Jul 10 10:49:34.743: INFO: Pod "downwardapi-volume-0ae623c6-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 58.745049ms Jul 10 10:49:36.747: INFO: Pod "downwardapi-volume-0ae623c6-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062706178s Jul 10 10:49:38.750: INFO: Pod "downwardapi-volume-0ae623c6-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065654854s Jul 10 10:49:41.912: INFO: Pod "downwardapi-volume-0ae623c6-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.227839456s Jul 10 10:49:45.523: INFO: Pod "downwardapi-volume-0ae623c6-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.838925222s Jul 10 10:49:47.649: INFO: Pod "downwardapi-volume-0ae623c6-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.96482215s Jul 10 10:49:50.542: INFO: Pod "downwardapi-volume-0ae623c6-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.85826848s Jul 10 10:49:52.547: INFO: Pod "downwardapi-volume-0ae623c6-c29b-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.863099919s STEP: Saw pod success Jul 10 10:49:52.547: INFO: Pod "downwardapi-volume-0ae623c6-c29b-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 10:49:52.550: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0ae623c6-c29b-11ea-a406-0242ac11000f container client-container: STEP: delete the pod Jul 10 10:49:53.077: INFO: Waiting for pod downwardapi-volume-0ae623c6-c29b-11ea-a406-0242ac11000f to disappear Jul 10 10:49:53.445: INFO: Pod downwardapi-volume-0ae623c6-c29b-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 10:49:53.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-sggq9" for this suite. Jul 10 10:50:01.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 10:50:02.183: INFO: namespace: e2e-tests-downward-api-sggq9, resource: bindings, ignored listing per whitelist Jul 10 10:50:02.298: INFO: namespace e2e-tests-downward-api-sggq9 deletion completed in 8.848936312s • [SLOW TEST:28.614 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 10:50:02.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Jul 10 10:50:02.433: INFO: Waiting up to 5m0s for pod "pod-1b704de3-c29b-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-htdrn" to be "success or failure" Jul 10 10:50:02.446: INFO: Pod "pod-1b704de3-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.762248ms Jul 10 10:50:05.116: INFO: Pod "pod-1b704de3-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.683032682s Jul 10 10:50:07.453: INFO: Pod "pod-1b704de3-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.019429006s Jul 10 10:50:09.456: INFO: Pod "pod-1b704de3-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.022339133s Jul 10 10:50:11.460: INFO: Pod "pod-1b704de3-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.026458979s Jul 10 10:50:13.464: INFO: Pod "pod-1b704de3-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.030197589s Jul 10 10:50:15.565: INFO: Pod "pod-1b704de3-c29b-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.131865813s STEP: Saw pod success Jul 10 10:50:15.565: INFO: Pod "pod-1b704de3-c29b-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 10:50:15.568: INFO: Trying to get logs from node hunter-worker pod pod-1b704de3-c29b-11ea-a406-0242ac11000f container test-container: STEP: delete the pod Jul 10 10:50:16.142: INFO: Waiting for pod pod-1b704de3-c29b-11ea-a406-0242ac11000f to disappear Jul 10 10:50:16.373: INFO: Pod pod-1b704de3-c29b-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 10:50:16.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-htdrn" for this suite. Jul 10 10:50:24.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 10:50:24.627: INFO: namespace: e2e-tests-emptydir-htdrn, resource: bindings, ignored listing per whitelist Jul 10 10:50:24.717: INFO: namespace e2e-tests-emptydir-htdrn deletion completed in 8.339405933s • [SLOW TEST:22.418 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 10:50:24.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 10 10:50:25.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-t8ttm' Jul 10 10:50:35.025: INFO: stderr: "" Jul 10 10:50:35.025: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jul 10 10:50:50.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-t8ttm -o json' Jul 10 
10:50:50.156: INFO: stderr: "" Jul 10 10:50:50.156: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-10T10:50:35Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-t8ttm\",\n \"resourceVersion\": \"3922\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-t8ttm/pods/e2e-test-nginx-pod\",\n \"uid\": \"2edc2e81-c29b-11ea-b2c9-0242ac120008\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-tcjcz\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-tcjcz\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-tcjcz\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-10T10:50:35Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-10T10:50:48Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-10T10:50:48Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-10T10:50:35Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://0a226f15b9b4c719bcff0bac8bad8a50b5835a87c5615a5cc922afdb82f3d76c\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-10T10:50:48Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.2\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.7\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-10T10:50:35Z\"\n }\n}\n" STEP: replace the image in the pod Jul 10 10:50:50.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-t8ttm' Jul 10 10:50:50.461: INFO: stderr: "" Jul 10 10:50:50.461: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jul 10 10:50:50.475: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-t8ttm' Jul 10 10:51:06.837: INFO: stderr: "" Jul 10 10:51:06.837: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 10:51:06.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t8ttm" for this suite. Jul 10 10:51:19.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 10:51:19.771: INFO: namespace: e2e-tests-kubectl-t8ttm, resource: bindings, ignored listing per whitelist Jul 10 10:51:21.287: INFO: namespace e2e-tests-kubectl-t8ttm deletion completed in 13.851331199s • [SLOW TEST:56.569 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 10:51:21.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-4a88da23-c29b-11ea-a406-0242ac11000f STEP: Creating secret with name secret-projected-all-test-volume-4a88d9f6-c29b-11ea-a406-0242ac11000f STEP: Creating a pod to test Check all projections for projected volume plugin Jul 10 10:51:21.539: INFO: Waiting up to 5m0s for pod "projected-volume-4a88d9a5-c29b-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-srn2d" to be "success or failure" Jul 10 10:51:21.722: INFO: Pod "projected-volume-4a88d9a5-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 182.856581ms Jul 10 10:51:23.746: INFO: Pod "projected-volume-4a88d9a5-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20733288s Jul 10 10:51:25.818: INFO: Pod "projected-volume-4a88d9a5-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.279087137s Jul 10 10:51:27.939: INFO: Pod "projected-volume-4a88d9a5-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.400692811s Jul 10 10:51:29.944: INFO: Pod "projected-volume-4a88d9a5-c29b-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.405032857s Jul 10 10:51:31.947: INFO: Pod "projected-volume-4a88d9a5-c29b-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.408638496s STEP: Saw pod success Jul 10 10:51:31.947: INFO: Pod "projected-volume-4a88d9a5-c29b-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 10:51:31.949: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-4a88d9a5-c29b-11ea-a406-0242ac11000f container projected-all-volume-test: STEP: delete the pod Jul 10 10:51:33.179: INFO: Waiting for pod projected-volume-4a88d9a5-c29b-11ea-a406-0242ac11000f to disappear Jul 10 10:51:33.206: INFO: Pod projected-volume-4a88d9a5-c29b-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 10:51:33.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-srn2d" for this suite. Jul 10 10:51:41.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 10:51:41.283: INFO: namespace: e2e-tests-projected-srn2d, resource: bindings, ignored listing per whitelist Jul 10 10:51:41.343: INFO: namespace e2e-tests-projected-srn2d deletion completed in 8.134384907s • [SLOW TEST:20.056 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 10:51:41.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 10 10:51:41.628: INFO: Waiting up to 5m0s for pod "pod-568dc6b2-c29b-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-8lgb4" to be "success or failure" Jul 10 10:51:41.686: INFO: Pod "pod-568dc6b2-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 57.366097ms Jul 10 10:51:43.689: INFO: Pod "pod-568dc6b2-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060596767s Jul 10 10:51:45.693: INFO: Pod "pod-568dc6b2-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064306209s Jul 10 10:51:47.695: INFO: Pod "pod-568dc6b2-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067162383s Jul 10 10:51:50.184: INFO: Pod "pod-568dc6b2-c29b-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.555604574s Jul 10 10:51:53.154: INFO: Pod "pod-568dc6b2-c29b-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.525828827s STEP: Saw pod success Jul 10 10:51:53.154: INFO: Pod "pod-568dc6b2-c29b-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 10:51:53.158: INFO: Trying to get logs from node hunter-worker pod pod-568dc6b2-c29b-11ea-a406-0242ac11000f container test-container: STEP: delete the pod Jul 10 10:51:54.807: INFO: Waiting for pod pod-568dc6b2-c29b-11ea-a406-0242ac11000f to disappear Jul 10 10:51:55.039: INFO: Pod pod-568dc6b2-c29b-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 10:51:55.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8lgb4" for this suite. Jul 10 10:52:01.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 10:52:01.167: INFO: namespace: e2e-tests-emptydir-8lgb4, resource: bindings, ignored listing per whitelist Jul 10 10:52:01.190: INFO: namespace e2e-tests-emptydir-8lgb4 deletion completed in 6.14731861s • [SLOW TEST:19.846 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 10:52:01.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 10:53:04.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-wqjj4" for this suite. 
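For reference, the behaviour exercised by the container-probe test above follows from how the kubelet treats readiness probes: a probe that always fails leaves the pod Running but never Ready, and readiness failures never restart the container (only liveness probes trigger restarts). A minimal sketch of observing this by hand is a pod whose exec readiness probe always exits non-zero; this is illustrative only, not the pod the e2e framework creates, and the name readiness-demo and the sleep command are assumptions:

cat <<'EOF' | kubectl --kubeconfig=/root/.kube/config apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo            # hypothetical name, not taken from the test
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]    # keep the container alive
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so Ready stays False
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Expect READY 0/1 and RESTARTS 0 for as long as the pod runs
kubectl --kubeconfig=/root/.kube/config get pod readiness-demo -w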
Jul 10 10:53:34.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 10:53:34.551: INFO: namespace: e2e-tests-container-probe-wqjj4, resource: bindings, ignored listing per whitelist Jul 10 10:53:34.569: INFO: namespace e2e-tests-container-probe-wqjj4 deletion completed in 30.333151102s • [SLOW TEST:93.378 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 10:53:34.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jul 10 10:53:34.818: INFO: Waiting up to 5m0s for pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-glfq6" to be "success or failure" Jul 10 10:53:34.887: INFO: Pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 69.338656ms Jul 10 10:53:36.929: INFO: Pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111540734s Jul 10 10:53:38.937: INFO: Pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118618641s Jul 10 10:53:40.940: INFO: Pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122041077s Jul 10 10:53:43.070: INFO: Pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.252463695s Jul 10 10:53:45.074: INFO: Pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.256350264s Jul 10 10:53:47.077: INFO: Pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.259364211s Jul 10 10:53:49.567: INFO: Pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.749392347s Jul 10 10:53:51.638: INFO: Pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.820429309s Jul 10 10:53:54.693: INFO: Pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.87504131s Jul 10 10:53:56.837: INFO: Pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.018620345s Jul 10 10:53:58.860: INFO: Pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.041920958s STEP: Saw pod success Jul 10 10:53:58.860: INFO: Pod "downward-api-9a070829-c29b-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 10:53:58.862: INFO: Trying to get logs from node hunter-worker pod downward-api-9a070829-c29b-11ea-a406-0242ac11000f container dapi-container: STEP: delete the pod Jul 10 10:53:58.946: INFO: Waiting for pod downward-api-9a070829-c29b-11ea-a406-0242ac11000f to disappear Jul 10 10:53:59.262: INFO: Pod downward-api-9a070829-c29b-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 10:53:59.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-glfq6" for this suite. Jul 10 10:54:05.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 10:54:05.737: INFO: namespace: e2e-tests-downward-api-glfq6, resource: bindings, ignored listing per whitelist Jul 10 10:54:05.741: INFO: namespace e2e-tests-downward-api-glfq6 deletion completed in 6.476957687s • [SLOW TEST:31.172 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 10:54:05.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-7wmhp [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating stateful set ss in namespace e2e-tests-statefulset-7wmhp STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-7wmhp Jul 10 10:54:06.014: INFO: Found 0 stateful pods, waiting for 1 Jul 10 10:54:16.018: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jul 10 10:54:16.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 10 10:54:16.397: INFO: stderr: "I0710 10:54:16.271473 133 log.go:172] (0xc000162840) (0xc000734640) Create stream\nI0710 10:54:16.271562 133 log.go:172] (0xc000162840) (0xc000734640) Stream added, broadcasting: 
1\nI0710 10:54:16.275658 133 log.go:172] (0xc000162840) Reply frame received for 1\nI0710 10:54:16.275708 133 log.go:172] (0xc000162840) (0xc00078ed20) Create stream\nI0710 10:54:16.275725 133 log.go:172] (0xc000162840) (0xc00078ed20) Stream added, broadcasting: 3\nI0710 10:54:16.277126 133 log.go:172] (0xc000162840) Reply frame received for 3\nI0710 10:54:16.277177 133 log.go:172] (0xc000162840) (0xc00078c000) Create stream\nI0710 10:54:16.277191 133 log.go:172] (0xc000162840) (0xc00078c000) Stream added, broadcasting: 5\nI0710 10:54:16.277898 133 log.go:172] (0xc000162840) Reply frame received for 5\nI0710 10:54:16.393518 133 log.go:172] (0xc000162840) Data frame received for 3\nI0710 10:54:16.393548 133 log.go:172] (0xc00078ed20) (3) Data frame handling\nI0710 10:54:16.393564 133 log.go:172] (0xc00078ed20) (3) Data frame sent\nI0710 10:54:16.393648 133 log.go:172] (0xc000162840) Data frame received for 3\nI0710 10:54:16.393684 133 log.go:172] (0xc00078ed20) (3) Data frame handling\nI0710 10:54:16.393709 133 log.go:172] (0xc000162840) Data frame received for 5\nI0710 10:54:16.393720 133 log.go:172] (0xc00078c000) (5) Data frame handling\nI0710 10:54:16.395065 133 log.go:172] (0xc000162840) Data frame received for 1\nI0710 10:54:16.395079 133 log.go:172] (0xc000734640) (1) Data frame handling\nI0710 10:54:16.395093 133 log.go:172] (0xc000734640) (1) Data frame sent\nI0710 10:54:16.395103 133 log.go:172] (0xc000162840) (0xc000734640) Stream removed, broadcasting: 1\nI0710 10:54:16.395140 133 log.go:172] (0xc000162840) Go away received\nI0710 10:54:16.395248 133 log.go:172] (0xc000162840) (0xc000734640) Stream removed, broadcasting: 1\nI0710 10:54:16.395258 133 log.go:172] (0xc000162840) (0xc00078ed20) Stream removed, broadcasting: 3\nI0710 10:54:16.395268 133 log.go:172] (0xc000162840) (0xc00078c000) Stream removed, broadcasting: 5\n" Jul 10 10:54:16.397: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 10 10:54:16.397: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 10 10:54:16.400: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jul 10 10:54:26.430: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 10 10:54:26.430: INFO: Waiting for statefulset status.replicas updated to 0 Jul 10 10:54:26.495: INFO: POD NODE PHASE GRACE CONDITIONS Jul 10 10:54:26.495: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC }] Jul 10 10:54:26.495: INFO: Jul 10 10:54:26.495: INFO: StatefulSet ss has not reached scale 3, at 1 Jul 10 10:54:27.619: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.94251708s Jul 10 10:54:28.639: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.818894837s Jul 10 10:54:29.969: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.798966256s Jul 10 10:54:30.974: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.468112173s Jul 10 10:54:32.083: INFO: Verifying statefulset ss doesn't scale past 3 for another 
4.463604174s Jul 10 10:54:33.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.35428637s Jul 10 10:54:34.092: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.35023718s Jul 10 10:54:35.311: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.345585111s Jul 10 10:54:36.315: INFO: Verifying statefulset ss doesn't scale past 3 for another 127.067274ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-7wmhp Jul 10 10:54:37.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:54:39.404: INFO: stderr: "I0710 10:54:39.334996 156 log.go:172] (0xc0008162c0) (0xc0006f0640) Create stream\nI0710 10:54:39.335064 156 log.go:172] (0xc0008162c0) (0xc0006f0640) Stream added, broadcasting: 1\nI0710 10:54:39.337095 156 log.go:172] (0xc0008162c0) Reply frame received for 1\nI0710 10:54:39.337127 156 log.go:172] (0xc0008162c0) (0xc000790f00) Create stream\nI0710 10:54:39.337135 156 log.go:172] (0xc0008162c0) (0xc000790f00) Stream added, broadcasting: 3\nI0710 10:54:39.337908 156 log.go:172] (0xc0008162c0) Reply frame received for 3\nI0710 10:54:39.337957 156 log.go:172] (0xc0008162c0) (0xc0006f06e0) Create stream\nI0710 10:54:39.337983 156 log.go:172] (0xc0008162c0) (0xc0006f06e0) Stream added, broadcasting: 5\nI0710 10:54:39.338691 156 log.go:172] (0xc0008162c0) Reply frame received for 5\nI0710 10:54:39.400131 156 log.go:172] (0xc0008162c0) Data frame received for 3\nI0710 10:54:39.400160 156 log.go:172] (0xc000790f00) (3) Data frame handling\nI0710 10:54:39.400176 156 log.go:172] (0xc000790f00) (3) Data frame sent\nI0710 10:54:39.400233 156 log.go:172] (0xc0008162c0) Data frame received for 5\nI0710 10:54:39.400245 156 log.go:172] (0xc0006f06e0) (5) Data frame handling\nI0710 10:54:39.400651 156 log.go:172] (0xc0008162c0) Data frame received for 3\nI0710 10:54:39.400671 156 log.go:172] (0xc000790f00) (3) Data frame handling\nI0710 10:54:39.402419 156 log.go:172] (0xc0008162c0) Data frame received for 1\nI0710 10:54:39.402433 156 log.go:172] (0xc0006f0640) (1) Data frame handling\nI0710 10:54:39.402443 156 log.go:172] (0xc0006f0640) (1) Data frame sent\nI0710 10:54:39.402458 156 log.go:172] (0xc0008162c0) (0xc0006f0640) Stream removed, broadcasting: 1\nI0710 10:54:39.402467 156 log.go:172] (0xc0008162c0) Go away received\nI0710 10:54:39.402641 156 log.go:172] (0xc0008162c0) (0xc0006f0640) Stream removed, broadcasting: 1\nI0710 10:54:39.402654 156 log.go:172] (0xc0008162c0) (0xc000790f00) Stream removed, broadcasting: 3\nI0710 10:54:39.402664 156 log.go:172] (0xc0008162c0) (0xc0006f06e0) Stream removed, broadcasting: 5\n" Jul 10 10:54:39.404: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 10 10:54:39.404: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 10 10:54:39.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:54:40.649: INFO: rc: 1 Jul 10 10:54:40.649: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-1 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001ee3560 exit status 1 true [0xc0013a0438 0xc0013a0478 0xc0013a0490] [0xc0013a0438 0xc0013a0478 0xc0013a0490] [0xc0013a0470 0xc0013a0488] [0x935700 0x935700] 0xc001882120 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jul 10 10:54:50.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:54:50.772: INFO: rc: 1 Jul 10 10:54:50.773: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc000c052f0 exit status 1 true [0xc00176c638 0xc00176c650 0xc00176c668] [0xc00176c638 0xc00176c650 0xc00176c668] [0xc00176c648 0xc00176c660] [0x935700 0x935700] 0xc001943ec0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jul 10 10:55:00.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:55:01.412: INFO: stderr: "I0710 10:55:01.339770 223 log.go:172] (0xc00013c580) (0xc000716000) Create stream\nI0710 10:55:01.339821 223 log.go:172] (0xc00013c580) (0xc000716000) Stream added, broadcasting: 1\nI0710 10:55:01.342927 223 log.go:172] (0xc00013c580) Reply frame received for 1\nI0710 10:55:01.342968 223 log.go:172] (0xc00013c580) (0xc00001e000) Create stream\nI0710 10:55:01.342977 223 log.go:172] (0xc00013c580) (0xc00001e000) Stream added, broadcasting: 3\nI0710 10:55:01.343880 223 log.go:172] (0xc00013c580) Reply frame received for 3\nI0710 10:55:01.343920 223 log.go:172] (0xc00013c580) (0xc00036a000) Create stream\nI0710 10:55:01.343937 223 log.go:172] (0xc00013c580) (0xc00036a000) Stream added, broadcasting: 5\nI0710 10:55:01.344957 223 log.go:172] (0xc00013c580) Reply frame received for 5\nI0710 10:55:01.408558 223 log.go:172] (0xc00013c580) Data frame received for 5\nI0710 10:55:01.408595 223 log.go:172] (0xc00036a000) (5) Data frame handling\nI0710 10:55:01.408609 223 log.go:172] (0xc00036a000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0710 10:55:01.408626 223 log.go:172] (0xc00013c580) Data frame received for 3\nI0710 10:55:01.408636 223 log.go:172] (0xc00001e000) (3) Data frame handling\nI0710 10:55:01.408646 223 log.go:172] (0xc00001e000) (3) Data frame sent\nI0710 10:55:01.408670 223 log.go:172] (0xc00013c580) Data frame received for 3\nI0710 10:55:01.408679 223 log.go:172] (0xc00001e000) (3) Data frame handling\nI0710 10:55:01.408981 223 log.go:172] (0xc00013c580) Data frame received for 5\nI0710 10:55:01.408998 223 log.go:172] (0xc00036a000) (5) Data frame handling\nI0710 10:55:01.410512 223 log.go:172] (0xc00013c580) Data frame received for 1\nI0710 10:55:01.410532 223 log.go:172] (0xc000716000) (1) Data frame handling\nI0710 10:55:01.410548 223 log.go:172] (0xc000716000) (1) Data frame sent\nI0710 10:55:01.410565 223 log.go:172] (0xc00013c580) (0xc000716000) Stream removed, broadcasting: 1\nI0710 10:55:01.410580 223 log.go:172] (0xc00013c580) Go away received\nI0710 
10:55:01.410753 223 log.go:172] (0xc00013c580) (0xc000716000) Stream removed, broadcasting: 1\nI0710 10:55:01.410772 223 log.go:172] (0xc00013c580) (0xc00001e000) Stream removed, broadcasting: 3\nI0710 10:55:01.410782 223 log.go:172] (0xc00013c580) (0xc00036a000) Stream removed, broadcasting: 5\n" Jul 10 10:55:01.412: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 10 10:55:01.412: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 10 10:55:01.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:55:02.499: INFO: stderr: "I0710 10:55:02.421280 245 log.go:172] (0xc0001386e0) (0xc0006e7400) Create stream\nI0710 10:55:02.421344 245 log.go:172] (0xc0001386e0) (0xc0006e7400) Stream added, broadcasting: 1\nI0710 10:55:02.423496 245 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0710 10:55:02.423541 245 log.go:172] (0xc0001386e0) (0xc000740000) Create stream\nI0710 10:55:02.423551 245 log.go:172] (0xc0001386e0) (0xc000740000) Stream added, broadcasting: 3\nI0710 10:55:02.424423 245 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0710 10:55:02.424473 245 log.go:172] (0xc0001386e0) (0xc00034c000) Create stream\nI0710 10:55:02.424494 245 log.go:172] (0xc0001386e0) (0xc00034c000) Stream added, broadcasting: 5\nI0710 10:55:02.425391 245 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0710 10:55:02.491820 245 log.go:172] (0xc0001386e0) Data frame received for 3\nI0710 10:55:02.491863 245 log.go:172] (0xc000740000) (3) Data frame handling\nI0710 10:55:02.491876 245 log.go:172] (0xc000740000) (3) Data frame sent\nI0710 10:55:02.491881 245 log.go:172] (0xc0001386e0) Data frame received for 3\nI0710 10:55:02.491886 245 log.go:172] (0xc000740000) (3) Data frame handling\nI0710 10:55:02.491921 245 log.go:172] (0xc0001386e0) Data frame received for 5\nI0710 10:55:02.491930 245 log.go:172] (0xc00034c000) (5) Data frame handling\nI0710 10:55:02.491940 245 log.go:172] (0xc00034c000) (5) Data frame sent\nI0710 10:55:02.491947 245 log.go:172] (0xc0001386e0) Data frame received for 5\nI0710 10:55:02.491951 245 log.go:172] (0xc00034c000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0710 10:55:02.496333 245 log.go:172] (0xc0001386e0) Data frame received for 1\nI0710 10:55:02.496378 245 log.go:172] (0xc0006e7400) (1) Data frame handling\nI0710 10:55:02.496401 245 log.go:172] (0xc0006e7400) (1) Data frame sent\nI0710 10:55:02.496425 245 log.go:172] (0xc0001386e0) (0xc0006e7400) Stream removed, broadcasting: 1\nI0710 10:55:02.496468 245 log.go:172] (0xc0001386e0) Go away received\nI0710 10:55:02.496828 245 log.go:172] (0xc0001386e0) (0xc0006e7400) Stream removed, broadcasting: 1\nI0710 10:55:02.496856 245 log.go:172] (0xc0001386e0) (0xc000740000) Stream removed, broadcasting: 3\nI0710 10:55:02.496866 245 log.go:172] (0xc0001386e0) (0xc00034c000) Stream removed, broadcasting: 5\n" Jul 10 10:55:02.500: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jul 10 10:55:02.500: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jul 10 10:55:02.819: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jul 10 10:55:02.819: INFO: Waiting for pod ss-1 to enter 
Running - Ready=true, currently Running - Ready=true Jul 10 10:55:02.819: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jul 10 10:55:02.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 10 10:55:03.405: INFO: stderr: "I0710 10:55:03.331167 268 log.go:172] (0xc0008682c0) (0xc000732640) Create stream\nI0710 10:55:03.331220 268 log.go:172] (0xc0008682c0) (0xc000732640) Stream added, broadcasting: 1\nI0710 10:55:03.333593 268 log.go:172] (0xc0008682c0) Reply frame received for 1\nI0710 10:55:03.333635 268 log.go:172] (0xc0008682c0) (0xc000604dc0) Create stream\nI0710 10:55:03.333664 268 log.go:172] (0xc0008682c0) (0xc000604dc0) Stream added, broadcasting: 3\nI0710 10:55:03.334532 268 log.go:172] (0xc0008682c0) Reply frame received for 3\nI0710 10:55:03.334568 268 log.go:172] (0xc0008682c0) (0xc000604f00) Create stream\nI0710 10:55:03.334577 268 log.go:172] (0xc0008682c0) (0xc000604f00) Stream added, broadcasting: 5\nI0710 10:55:03.335459 268 log.go:172] (0xc0008682c0) Reply frame received for 5\nI0710 10:55:03.399530 268 log.go:172] (0xc0008682c0) Data frame received for 3\nI0710 10:55:03.399569 268 log.go:172] (0xc000604dc0) (3) Data frame handling\nI0710 10:55:03.399595 268 log.go:172] (0xc000604dc0) (3) Data frame sent\nI0710 10:55:03.399628 268 log.go:172] (0xc0008682c0) Data frame received for 3\nI0710 10:55:03.399643 268 log.go:172] (0xc000604dc0) (3) Data frame handling\nI0710 10:55:03.399767 268 log.go:172] (0xc0008682c0) Data frame received for 5\nI0710 10:55:03.399784 268 log.go:172] (0xc000604f00) (5) Data frame handling\nI0710 10:55:03.402256 268 log.go:172] (0xc0008682c0) Data frame received for 1\nI0710 10:55:03.402271 268 log.go:172] (0xc000732640) (1) Data frame handling\nI0710 10:55:03.402289 268 log.go:172] (0xc000732640) (1) Data frame sent\nI0710 10:55:03.402344 268 log.go:172] (0xc0008682c0) (0xc000732640) Stream removed, broadcasting: 1\nI0710 10:55:03.402529 268 log.go:172] (0xc0008682c0) (0xc000732640) Stream removed, broadcasting: 1\nI0710 10:55:03.402564 268 log.go:172] (0xc0008682c0) Go away received\nI0710 10:55:03.402603 268 log.go:172] (0xc0008682c0) (0xc000604dc0) Stream removed, broadcasting: 3\nI0710 10:55:03.402627 268 log.go:172] (0xc0008682c0) (0xc000604f00) Stream removed, broadcasting: 5\n" Jul 10 10:55:03.405: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 10 10:55:03.405: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 10 10:55:03.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 10 10:55:03.789: INFO: stderr: "I0710 10:55:03.524082 290 log.go:172] (0xc00071a420) (0xc000686640) Create stream\nI0710 10:55:03.524123 290 log.go:172] (0xc00071a420) (0xc000686640) Stream added, broadcasting: 1\nI0710 10:55:03.526391 290 log.go:172] (0xc00071a420) Reply frame received for 1\nI0710 10:55:03.526413 290 log.go:172] (0xc00071a420) (0xc0006866e0) Create stream\nI0710 10:55:03.526419 290 log.go:172] (0xc00071a420) (0xc0006866e0) Stream added, broadcasting: 3\nI0710 10:55:03.527103 290 log.go:172] (0xc00071a420) Reply frame received for 3\nI0710 
10:55:03.527137 290 log.go:172] (0xc00071a420) (0xc0003a4c80) Create stream\nI0710 10:55:03.527153 290 log.go:172] (0xc00071a420) (0xc0003a4c80) Stream added, broadcasting: 5\nI0710 10:55:03.527821 290 log.go:172] (0xc00071a420) Reply frame received for 5\nI0710 10:55:03.784320 290 log.go:172] (0xc00071a420) Data frame received for 3\nI0710 10:55:03.784354 290 log.go:172] (0xc0006866e0) (3) Data frame handling\nI0710 10:55:03.784380 290 log.go:172] (0xc0006866e0) (3) Data frame sent\nI0710 10:55:03.784471 290 log.go:172] (0xc00071a420) Data frame received for 5\nI0710 10:55:03.784505 290 log.go:172] (0xc0003a4c80) (5) Data frame handling\nI0710 10:55:03.784563 290 log.go:172] (0xc00071a420) Data frame received for 3\nI0710 10:55:03.784577 290 log.go:172] (0xc0006866e0) (3) Data frame handling\nI0710 10:55:03.786447 290 log.go:172] (0xc00071a420) Data frame received for 1\nI0710 10:55:03.786466 290 log.go:172] (0xc000686640) (1) Data frame handling\nI0710 10:55:03.786488 290 log.go:172] (0xc000686640) (1) Data frame sent\nI0710 10:55:03.786507 290 log.go:172] (0xc00071a420) (0xc000686640) Stream removed, broadcasting: 1\nI0710 10:55:03.786653 290 log.go:172] (0xc00071a420) Go away received\nI0710 10:55:03.786749 290 log.go:172] (0xc00071a420) (0xc000686640) Stream removed, broadcasting: 1\nI0710 10:55:03.786767 290 log.go:172] (0xc00071a420) (0xc0006866e0) Stream removed, broadcasting: 3\nI0710 10:55:03.786776 290 log.go:172] (0xc00071a420) (0xc0003a4c80) Stream removed, broadcasting: 5\n" Jul 10 10:55:03.789: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 10 10:55:03.789: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 10 10:55:03.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jul 10 10:55:04.507: INFO: stderr: "I0710 10:55:04.290771 313 log.go:172] (0xc00072a2c0) (0xc000692640) Create stream\nI0710 10:55:04.290840 313 log.go:172] (0xc00072a2c0) (0xc000692640) Stream added, broadcasting: 1\nI0710 10:55:04.293034 313 log.go:172] (0xc00072a2c0) Reply frame received for 1\nI0710 10:55:04.293075 313 log.go:172] (0xc00072a2c0) (0xc00001ec80) Create stream\nI0710 10:55:04.293085 313 log.go:172] (0xc00072a2c0) (0xc00001ec80) Stream added, broadcasting: 3\nI0710 10:55:04.293960 313 log.go:172] (0xc00072a2c0) Reply frame received for 3\nI0710 10:55:04.293990 313 log.go:172] (0xc00072a2c0) (0xc00001edc0) Create stream\nI0710 10:55:04.294009 313 log.go:172] (0xc00072a2c0) (0xc00001edc0) Stream added, broadcasting: 5\nI0710 10:55:04.294739 313 log.go:172] (0xc00072a2c0) Reply frame received for 5\nI0710 10:55:04.502399 313 log.go:172] (0xc00072a2c0) Data frame received for 3\nI0710 10:55:04.502428 313 log.go:172] (0xc00001ec80) (3) Data frame handling\nI0710 10:55:04.502446 313 log.go:172] (0xc00001ec80) (3) Data frame sent\nI0710 10:55:04.502585 313 log.go:172] (0xc00072a2c0) Data frame received for 3\nI0710 10:55:04.502602 313 log.go:172] (0xc00001ec80) (3) Data frame handling\nI0710 10:55:04.502795 313 log.go:172] (0xc00072a2c0) Data frame received for 5\nI0710 10:55:04.502811 313 log.go:172] (0xc00001edc0) (5) Data frame handling\nI0710 10:55:04.504593 313 log.go:172] (0xc00072a2c0) Data frame received for 1\nI0710 10:55:04.504613 313 log.go:172] (0xc000692640) (1) Data frame handling\nI0710 10:55:04.504638 313 log.go:172] 
(0xc000692640) (1) Data frame sent\nI0710 10:55:04.504659 313 log.go:172] (0xc00072a2c0) (0xc000692640) Stream removed, broadcasting: 1\nI0710 10:55:04.504867 313 log.go:172] (0xc00072a2c0) Go away received\nI0710 10:55:04.504966 313 log.go:172] (0xc00072a2c0) (0xc000692640) Stream removed, broadcasting: 1\nI0710 10:55:04.504979 313 log.go:172] (0xc00072a2c0) (0xc00001ec80) Stream removed, broadcasting: 3\nI0710 10:55:04.504984 313 log.go:172] (0xc00072a2c0) (0xc00001edc0) Stream removed, broadcasting: 5\n" Jul 10 10:55:04.507: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jul 10 10:55:04.507: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jul 10 10:55:04.507: INFO: Waiting for statefulset status.replicas updated to 0 Jul 10 10:55:05.334: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jul 10 10:55:15.988: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jul 10 10:55:15.988: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jul 10 10:55:15.988: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jul 10 10:55:16.843: INFO: POD NODE PHASE GRACE CONDITIONS Jul 10 10:55:16.843: INFO: ss-0 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC }] Jul 10 10:55:16.843: INFO: ss-1 hunter-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:16.843: INFO: ss-2 hunter-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:16.843: INFO: Jul 10 10:55:16.843: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 10 10:55:18.616: INFO: POD NODE PHASE GRACE CONDITIONS Jul 10 10:55:18.617: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC }] Jul 10 10:55:18.617: INFO: ss-1 hunter-worker Running 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:18.617: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:18.617: INFO: Jul 10 10:55:18.617: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 10 10:55:19.799: INFO: POD NODE PHASE GRACE CONDITIONS Jul 10 10:55:19.799: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC }] Jul 10 10:55:19.799: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:19.799: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:19.799: INFO: Jul 10 10:55:19.799: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 10 10:55:20.970: INFO: POD NODE PHASE GRACE CONDITIONS Jul 10 10:55:20.970: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC }] Jul 10 10:55:20.970: INFO: ss-1 hunter-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-07-10 10:55:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:20.970: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:20.970: INFO: Jul 10 10:55:20.970: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 10 10:55:22.041: INFO: POD NODE PHASE GRACE CONDITIONS Jul 10 10:55:22.041: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC }] Jul 10 10:55:22.041: INFO: ss-1 hunter-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:03 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:22.041: INFO: ss-2 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:22.041: INFO: Jul 10 10:55:22.041: INFO: StatefulSet ss has not reached scale 0, at 3 Jul 10 10:55:23.044: INFO: POD NODE PHASE GRACE CONDITIONS Jul 10 10:55:23.044: INFO: ss-0 hunter-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC }] Jul 10 10:55:23.044: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:23.044: INFO: Jul 10 10:55:23.044: INFO: StatefulSet ss has not 
reached scale 0, at 2 Jul 10 10:55:24.180: INFO: POD NODE PHASE GRACE CONDITIONS Jul 10 10:55:24.180: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC }] Jul 10 10:55:24.181: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:24.181: INFO: Jul 10 10:55:24.181: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 10 10:55:25.184: INFO: POD NODE PHASE GRACE CONDITIONS Jul 10 10:55:25.184: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC }] Jul 10 10:55:25.184: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:25.184: INFO: Jul 10 10:55:25.184: INFO: StatefulSet ss has not reached scale 0, at 2 Jul 10 10:55:26.233: INFO: POD NODE PHASE GRACE CONDITIONS Jul 10 10:55:26.233: INFO: ss-0 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:06 +0000 UTC }] Jul 10 10:55:26.233: INFO: ss-2 hunter-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:55:04 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 10:54:26 +0000 UTC }] Jul 10 10:55:26.233: INFO: Jul 10 10:55:26.233: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until 
none of pods will run in namespace e2e-tests-statefulset-7wmhp Jul 10 10:55:27.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:55:27.743: INFO: rc: 1 Jul 10 10:55:27.743: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0017b6a80 exit status 1 true [0xc0012563f0 0xc001256408 0xc001256420] [0xc0012563f0 0xc001256408 0xc001256420] [0xc001256400 0xc001256418] [0x935700 0x935700] 0xc0016e3440 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jul 10 10:55:37.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:55:37.836: INFO: rc: 1 Jul 10 10:55:37.836: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c48990 exit status 1 true [0xc0013a05b0 0xc0013a05c8 0xc0013a05e0] [0xc0013a05b0 0xc0013a05c8 0xc0013a05e0] [0xc0013a05c0 0xc0013a05d8] [0x935700 0x935700] 0xc001883320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:55:47.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:55:48.352: INFO: rc: 1 Jul 10 10:55:48.352: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0011f1380 exit status 1 true [0xc00114c748 0xc00114c760 0xc00114c778] [0xc00114c748 0xc00114c760 0xc00114c778] [0xc00114c758 0xc00114c770] [0x935700 0x935700] 0xc001b9c840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:55:58.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:55:58.538: INFO: rc: 1 Jul 10 10:55:58.538: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000c48b10 exit status 1 true [0xc0013a05e8 0xc0013a0600 0xc0013a0618] [0xc0013a05e8 0xc0013a0600 0xc0013a0618] [0xc0013a05f8 0xc0013a0610] [0x935700 0x935700] 0xc0018835c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:56:08.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:56:08.879: INFO: rc: 1 Jul 10 10:56:08.879: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001312120 exit status 1 true [0xc00114c020 0xc00114c038 0xc00114c050] [0xc00114c020 0xc00114c038 0xc00114c050] [0xc00114c030 0xc00114c048] [0x935700 0x935700] 0xc0019121e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:56:18.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:56:19.207: INFO: rc: 1 Jul 10 10:56:19.207: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d34120 exit status 1 true [0xc0013a0010 0xc0013a0038 0xc0013a0050] [0xc0013a0010 0xc0013a0038 0xc0013a0050] [0xc0013a0030 0xc0013a0048] [0x935700 0x935700] 0xc00185a300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:56:29.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:56:29.782: INFO: rc: 1 Jul 10 10:56:29.782: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0002b5710 exit status 1 true [0xc001256008 0xc001256060 0xc001256078] [0xc001256008 0xc001256060 0xc001256078] [0xc001256048 0xc001256070] [0x935700 0x935700] 0xc001942a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:56:39.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:56:39.915: INFO: rc: 1 Jul 10 10:56:39.915: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017b2c90 exit status 1 true [0xc001902000 0xc001902018 0xc001902030] [0xc001902000 0xc001902018 0xc001902030] [0xc001902010 0xc001902028] [0x935700 0x935700] 0xc00174a000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:56:49.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:56:49.998: INFO: rc: 1 Jul 10 10:56:49.998: INFO: Waiting 10s to retry 
failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001312300 exit status 1 true [0xc00114c068 0xc00114c0a8 0xc00114c0c0] [0xc00114c068 0xc00114c0a8 0xc00114c0c0] [0xc00114c090 0xc00114c0b8] [0x935700 0x935700] 0xc001912480 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:56:59.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:57:00.466: INFO: rc: 1 Jul 10 10:57:00.466: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d34240 exit status 1 true [0xc0013a0058 0xc0013a0078 0xc0013a00b0] [0xc0013a0058 0xc0013a0078 0xc0013a00b0] [0xc0013a0068 0xc0013a0098] [0x935700 0x935700] 0xc00185a5a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:57:10.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:57:10.963: INFO: rc: 1 Jul 10 10:57:10.963: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d34420 exit status 1 true [0xc0013a00d0 0xc0013a0110 0xc0013a0128] [0xc0013a00d0 0xc0013a0110 0xc0013a0128] [0xc0013a0108 0xc0013a0120] [0x935700 0x935700] 0xc00185a840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:57:20.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:57:21.046: INFO: rc: 1 Jul 10 10:57:21.046: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0002b5860 exit status 1 true [0xc001256080 0xc001256098 0xc0012560b8] [0xc001256080 0xc001256098 0xc0012560b8] [0xc001256090 0xc0012560b0] [0x935700 0x935700] 0xc001942d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:57:31.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:57:31.140: INFO: rc: 1 Jul 10 10:57:31.141: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0002b5980 exit status 1 true [0xc0012560c0 0xc0012560d8 0xc001256108] [0xc0012560c0 0xc0012560d8 0xc001256108] [0xc0012560d0 0xc001256100] [0x935700 0x935700] 0xc001942fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:57:41.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:57:41.218: INFO: rc: 1 Jul 10 10:57:41.218: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d346c0 exit status 1 true [0xc0013a0130 0xc0013a0148 0xc0013a0168] [0xc0013a0130 0xc0013a0148 0xc0013a0168] [0xc0013a0140 0xc0013a0160] [0x935700 0x935700] 0xc00185ab40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:57:51.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:57:51.590: INFO: rc: 1 Jul 10 10:57:51.590: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001312420 exit status 1 true [0xc00114c0c8 0xc00114c0e0 0xc00114c0f8] [0xc00114c0c8 0xc00114c0e0 0xc00114c0f8] [0xc00114c0d8 0xc00114c0f0] [0x935700 0x935700] 0xc001912720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:58:01.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:58:01.766: INFO: rc: 1 Jul 10 10:58:01.766: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d34810 exit status 1 true [0xc0013a0170 0xc0013a0188 0xc0013a01a0] [0xc0013a0170 0xc0013a0188 0xc0013a01a0] [0xc0013a0180 0xc0013a0198] [0x935700 0x935700] 0xc00185ade0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:58:11.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:58:11.992: INFO: rc: 1 Jul 10 10:58:11.992: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d340f0 exit status 1 true [0xc0013a0028 0xc0013a0040 0xc0013a0058] [0xc0013a0028 0xc0013a0040 
0xc0013a0058] [0xc0013a0038 0xc0013a0050] [0x935700 0x935700] 0xc00185a000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:58:21.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:58:22.174: INFO: rc: 1 Jul 10 10:58:22.174: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017b2cc0 exit status 1 true [0xc001902000 0xc001902018 0xc001902030] [0xc001902000 0xc001902018 0xc001902030] [0xc001902010 0xc001902028] [0x935700 0x935700] 0xc00174a420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:58:32.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:58:32.255: INFO: rc: 1 Jul 10 10:58:32.255: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000d34360 exit status 1 true [0xc0013a0060 0xc0013a0090 0xc0013a00d0] [0xc0013a0060 0xc0013a0090 0xc0013a00d0] [0xc0013a0078 0xc0013a00b0] [0x935700 0x935700] 0xc00185a3c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:58:42.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:58:42.347: INFO: rc: 1 Jul 10 10:58:42.347: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017b2de0 exit status 1 true [0xc001902038 0xc001902050 0xc001902068] [0xc001902038 0xc001902050 0xc001902068] [0xc001902048 0xc001902060] [0x935700 0x935700] 0xc00174ab40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:58:52.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:58:52.468: INFO: rc: 1 Jul 10 10:58:52.468: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017b2f00 exit status 1 true [0xc001902070 0xc001902088 0xc0019020a0] [0xc001902070 0xc001902088 0xc0019020a0] [0xc001902080 0xc001902098] [0x935700 0x935700] 0xc00174b2c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 
10:59:02.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:59:02.553: INFO: rc: 1 Jul 10 10:59:02.553: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017b3050 exit status 1 true [0xc0019020a8 0xc0019020c0 0xc0019020d8] [0xc0019020a8 0xc0019020c0 0xc0019020d8] [0xc0019020b8 0xc0019020d0] [0x935700 0x935700] 0xc00174b740 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:59:12.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:59:12.631: INFO: rc: 1 Jul 10 10:59:12.631: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001312150 exit status 1 true [0xc001256008 0xc001256060 0xc001256078] [0xc001256008 0xc001256060 0xc001256078] [0xc001256048 0xc001256070] [0x935700 0x935700] 0xc001942a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:59:22.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:59:22.718: INFO: rc: 1 Jul 10 10:59:22.718: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017b31a0 exit status 1 true [0xc0019020e0 0xc0019020f8 0xc001902110] [0xc0019020e0 0xc0019020f8 0xc001902110] [0xc0019020f0 0xc001902108] [0x935700 0x935700] 0xc00174bd40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:59:32.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:59:32.806: INFO: rc: 1 Jul 10 10:59:32.806: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017b32f0 exit status 1 true [0xc001902118 0xc001902130 0xc001902148] [0xc001902118 0xc001902130 0xc001902148] [0xc001902128 0xc001902140] [0x935700 0x935700] 0xc0019120c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:59:42.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ 
|| true' Jul 10 10:59:42.896: INFO: rc: 1 Jul 10 10:59:42.896: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001312330 exit status 1 true [0xc001256080 0xc001256098 0xc0012560b8] [0xc001256080 0xc001256098 0xc0012560b8] [0xc001256090 0xc0012560b0] [0x935700 0x935700] 0xc001942d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 10:59:52.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 10:59:53.083: INFO: rc: 1 Jul 10 10:59:53.083: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001312480 exit status 1 true [0xc0012560c0 0xc0012560d8 0xc001256108] [0xc0012560c0 0xc0012560d8 0xc001256108] [0xc0012560d0 0xc001256100] [0x935700 0x935700] 0xc001942fc0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 11:00:03.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 11:00:03.160: INFO: rc: 1 Jul 10 11:00:03.160: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0002b57a0 exit status 1 true [0xc00114c000 0xc00114c030 0xc00114c048] [0xc00114c000 0xc00114c030 0xc00114c048] [0xc00114c028 0xc00114c040] [0x935700 0x935700] 0xc0014881e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 11:00:13.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 11:00:13.287: INFO: rc: 1 Jul 10 11:00:13.288: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017b2c60 exit status 1 true [0xc001902008 0xc001902020 0xc001902038] [0xc001902008 0xc001902020 0xc001902038] [0xc001902018 0xc001902030] [0x935700 0x935700] 0xc00174a000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 11:00:23.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 11:00:23.425: INFO: rc: 1 Jul 10 11:00:23.425: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec 
--namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0017b2e10 exit status 1 true [0xc001902040 0xc001902058 0xc001902070] [0xc001902040 0xc001902058 0xc001902070] [0xc001902050 0xc001902068] [0x935700 0x935700] 0xc00174a600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jul 10 11:00:33.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-7wmhp ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jul 10 11:00:33.593: INFO: rc: 1 Jul 10 11:00:33.594: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Jul 10 11:00:33.594: INFO: Scaling statefulset ss to 0 Jul 10 11:00:33.602: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jul 10 11:00:33.604: INFO: Deleting all statefulset in ns e2e-tests-statefulset-7wmhp Jul 10 11:00:33.606: INFO: Scaling statefulset ss to 0 Jul 10 11:00:33.612: INFO: Waiting for statefulset status.replicas updated to 0 Jul 10 11:00:33.614: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:00:33.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-7wmhp" for this suite. Jul 10 11:00:44.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:00:44.153: INFO: namespace: e2e-tests-statefulset-7wmhp, resource: bindings, ignored listing per whitelist Jul 10 11:00:44.350: INFO: namespace e2e-tests-statefulset-7wmhp deletion completed in 10.695466386s • [SLOW TEST:398.609 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:00:44.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 10 11:00:45.011: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jul 10 11:00:45.070: INFO: Pod name sample-pod: Found 0 pods out of 1 Jul 10 
11:00:50.231: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 10 11:00:54.246: INFO: Creating deployment "test-rolling-update-deployment" Jul 10 11:00:54.254: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jul 10 11:00:54.602: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jul 10 11:00:56.800: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jul 10 11:00:56.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729975654, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729975654, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729975655, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729975654, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 10 11:00:58.914: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729975654, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729975654, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729975655, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729975654, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 10 11:01:01.303: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 10 11:01:01.498: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-6twk6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6twk6/deployments/test-rolling-update-deployment,UID:9ff4020c-c29c-11ea-b2c9-0242ac120008,ResourceVersion:6137,Generation:1,CreationTimestamp:2020-07-10 11:00:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-10 11:00:54 +0000 UTC 2020-07-10 11:00:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-10 11:00:59 +0000 UTC 2020-07-10 11:00:54 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jul 10 11:01:01.500: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-6twk6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6twk6/replicasets/test-rolling-update-deployment-75db98fb4c,UID:a02b0ccc-c29c-11ea-b2c9-0242ac120008,ResourceVersion:6128,Generation:1,CreationTimestamp:2020-07-10 11:00:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9ff4020c-c29c-11ea-b2c9-0242ac120008 0xc0010e4877 
0xc0010e4878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 10 11:01:01.500: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jul 10 11:01:01.501: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-6twk6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6twk6/replicasets/test-rolling-update-controller,UID:9a72ce6c-c29c-11ea-b2c9-0242ac120008,ResourceVersion:6136,Generation:2,CreationTimestamp:2020-07-10 11:00:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9ff4020c-c29c-11ea-b2c9-0242ac120008 0xc0010e471f 0xc0010e4730}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx 
docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 10 11:01:01.503: INFO: Pod "test-rolling-update-deployment-75db98fb4c-87btl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-87btl,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-6twk6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6twk6/pods/test-rolling-update-deployment-75db98fb4c-87btl,UID:a055438c-c29c-11ea-b2c9-0242ac120008,ResourceVersion:6127,Generation:0,CreationTimestamp:2020-07-10 11:00:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c a02b0ccc-c29c-11ea-b2c9-0242ac120008 0xc001b730f7 0xc001b730f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4wzf5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4wzf5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-4wzf5 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b73170} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b73190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:00:55 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 
2020-07-10 11:00:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:00:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:00:54 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.16,StartTime:2020-07-10 11:00:55 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-10 11:00:59 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://1b8e06df12b64cab769b3ec0e36b119c6fb3bb8cc75edeff1c34d8d10a7a28bd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:01:01.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-6twk6" for this suite. Jul 10 11:01:11.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:01:11.561: INFO: namespace: e2e-tests-deployment-6twk6, resource: bindings, ignored listing per whitelist Jul 10 11:01:11.753: INFO: namespace e2e-tests-deployment-6twk6 deletion completed in 10.248323049s • [SLOW TEST:27.403 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:01:11.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 10 11:01:12.163: INFO: Waiting up to 5m0s for pod "pod-aa9be9eb-c29c-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-8gpwv" to be "success or failure" Jul 10 11:01:12.209: INFO: Pod "pod-aa9be9eb-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 45.643137ms Jul 10 11:01:14.237: INFO: Pod "pod-aa9be9eb-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073267725s Jul 10 11:01:16.319: INFO: Pod "pod-aa9be9eb-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155433357s Jul 10 11:01:19.012: INFO: Pod "pod-aa9be9eb-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.84883772s Jul 10 11:01:21.063: INFO: Pod "pod-aa9be9eb-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.899847787s Jul 10 11:01:23.067: INFO: Pod "pod-aa9be9eb-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.903512079s Jul 10 11:01:25.071: INFO: Pod "pod-aa9be9eb-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.907441684s Jul 10 11:01:27.075: INFO: Pod "pod-aa9be9eb-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.911151096s Jul 10 11:01:29.079: INFO: Pod "pod-aa9be9eb-c29c-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.91520481s STEP: Saw pod success Jul 10 11:01:29.079: INFO: Pod "pod-aa9be9eb-c29c-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:01:29.081: INFO: Trying to get logs from node hunter-worker pod pod-aa9be9eb-c29c-11ea-a406-0242ac11000f container test-container: STEP: delete the pod Jul 10 11:01:30.602: INFO: Waiting for pod pod-aa9be9eb-c29c-11ea-a406-0242ac11000f to disappear Jul 10 11:01:31.010: INFO: Pod pod-aa9be9eb-c29c-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:01:31.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-8gpwv" for this suite. Jul 10 11:01:39.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:01:40.676: INFO: namespace: e2e-tests-emptydir-8gpwv, resource: bindings, ignored listing per whitelist Jul 10 11:01:40.829: INFO: namespace e2e-tests-emptydir-8gpwv deletion completed in 9.815967288s • [SLOW TEST:29.076 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:01:40.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating replication controller my-hostname-basic-bc184ebf-c29c-11ea-a406-0242ac11000f Jul 10 11:01:42.034: INFO: Pod name my-hostname-basic-bc184ebf-c29c-11ea-a406-0242ac11000f: Found 0 pods out of 1 Jul 10 11:01:47.088: INFO: Pod name my-hostname-basic-bc184ebf-c29c-11ea-a406-0242ac11000f: Found 1 pods out of 1 Jul 10 11:01:47.088: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-bc184ebf-c29c-11ea-a406-0242ac11000f" are running Jul 10 11:01:53.517: INFO: Pod "my-hostname-basic-bc184ebf-c29c-11ea-a406-0242ac11000f-x87x5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-10 11:01:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-10 11:01:42 +0000 
UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bc184ebf-c29c-11ea-a406-0242ac11000f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-10 11:01:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-bc184ebf-c29c-11ea-a406-0242ac11000f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-10 11:01:42 +0000 UTC Reason: Message:}]) Jul 10 11:01:53.517: INFO: Trying to dial the pod Jul 10 11:01:58.527: INFO: Controller my-hostname-basic-bc184ebf-c29c-11ea-a406-0242ac11000f: Got expected result from replica 1 [my-hostname-basic-bc184ebf-c29c-11ea-a406-0242ac11000f-x87x5]: "my-hostname-basic-bc184ebf-c29c-11ea-a406-0242ac11000f-x87x5", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:01:58.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replication-controller-8fwfv" for this suite. Jul 10 11:02:06.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:02:06.897: INFO: namespace: e2e-tests-replication-controller-8fwfv, resource: bindings, ignored listing per whitelist Jul 10 11:02:06.922: INFO: namespace e2e-tests-replication-controller-8fwfv deletion completed in 8.39215012s • [SLOW TEST:26.093 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:02:06.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-cbb42b80-c29c-11ea-a406-0242ac11000f STEP: Creating a pod to test consume configMaps Jul 10 11:02:08.378: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cbf38af2-c29c-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-snk59" to be "success or failure" Jul 10 11:02:08.723: INFO: Pod "pod-projected-configmaps-cbf38af2-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 345.237489ms Jul 10 11:02:10.891: INFO: Pod "pod-projected-configmaps-cbf38af2-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.512836826s Jul 10 11:02:13.041: INFO: Pod "pod-projected-configmaps-cbf38af2-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.663079158s Jul 10 11:02:15.045: INFO: Pod "pod-projected-configmaps-cbf38af2-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.667131563s Jul 10 11:02:17.050: INFO: Pod "pod-projected-configmaps-cbf38af2-c29c-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 8.672145635s Jul 10 11:02:19.742: INFO: Pod "pod-projected-configmaps-cbf38af2-c29c-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.364230114s STEP: Saw pod success Jul 10 11:02:19.742: INFO: Pod "pod-projected-configmaps-cbf38af2-c29c-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:02:19.778: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-cbf38af2-c29c-11ea-a406-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Jul 10 11:02:20.562: INFO: Waiting for pod pod-projected-configmaps-cbf38af2-c29c-11ea-a406-0242ac11000f to disappear Jul 10 11:02:20.661: INFO: Pod pod-projected-configmaps-cbf38af2-c29c-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:02:20.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-snk59" for this suite. Jul 10 11:02:33.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:02:33.081: INFO: namespace: e2e-tests-projected-snk59, resource: bindings, ignored listing per whitelist Jul 10 11:02:33.086: INFO: namespace e2e-tests-projected-snk59 deletion completed in 12.332389587s • [SLOW TEST:26.163 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:02:33.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-dba9cba1-c29c-11ea-a406-0242ac11000f STEP: Creating a pod to test consume configMaps Jul 10 11:02:34.675: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dbb098ae-c29c-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-2bssc" to be "success or failure" Jul 10 11:02:34.907: INFO: Pod "pod-projected-configmaps-dbb098ae-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 231.55021ms Jul 10 11:02:36.987: INFO: Pod "pod-projected-configmaps-dbb098ae-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31227558s Jul 10 11:02:39.041: INFO: Pod "pod-projected-configmaps-dbb098ae-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.366105651s Jul 10 11:02:41.071: INFO: Pod "pod-projected-configmaps-dbb098ae-c29c-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.395485899s Jul 10 11:02:43.074: INFO: Pod "pod-projected-configmaps-dbb098ae-c29c-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.398825963s STEP: Saw pod success Jul 10 11:02:43.074: INFO: Pod "pod-projected-configmaps-dbb098ae-c29c-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:02:43.076: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-dbb098ae-c29c-11ea-a406-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Jul 10 11:02:43.185: INFO: Waiting for pod pod-projected-configmaps-dbb098ae-c29c-11ea-a406-0242ac11000f to disappear Jul 10 11:02:43.197: INFO: Pod pod-projected-configmaps-dbb098ae-c29c-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:02:43.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-2bssc" for this suite. Jul 10 11:02:49.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:02:49.281: INFO: namespace: e2e-tests-projected-2bssc, resource: bindings, ignored listing per whitelist Jul 10 11:02:49.309: INFO: namespace e2e-tests-projected-2bssc deletion completed in 6.10948982s • [SLOW TEST:16.223 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:02:49.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Jul 10 11:02:50.081: INFO: created pod pod-service-account-defaultsa Jul 10 11:02:50.081: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jul 10 11:02:50.097: INFO: created pod pod-service-account-mountsa Jul 10 11:02:50.097: INFO: pod pod-service-account-mountsa service account token volume mount: true Jul 10 11:02:50.108: INFO: created pod pod-service-account-nomountsa Jul 10 11:02:50.108: INFO: pod 
pod-service-account-nomountsa service account token volume mount: false Jul 10 11:02:50.173: INFO: created pod pod-service-account-defaultsa-mountspec Jul 10 11:02:50.173: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jul 10 11:02:50.188: INFO: created pod pod-service-account-mountsa-mountspec Jul 10 11:02:50.188: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jul 10 11:02:50.236: INFO: created pod pod-service-account-nomountsa-mountspec Jul 10 11:02:50.236: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jul 10 11:02:50.335: INFO: created pod pod-service-account-defaultsa-nomountspec Jul 10 11:02:50.335: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jul 10 11:02:50.358: INFO: created pod pod-service-account-mountsa-nomountspec Jul 10 11:02:50.358: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jul 10 11:02:50.427: INFO: created pod pod-service-account-nomountsa-nomountspec Jul 10 11:02:50.427: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:02:50.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-f8sdq" for this suite. Jul 10 11:03:32.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:03:32.632: INFO: namespace: e2e-tests-svcaccounts-f8sdq, resource: bindings, ignored listing per whitelist Jul 10 11:03:32.685: INFO: namespace e2e-tests-svcaccounts-f8sdq deletion completed in 42.185230866s • [SLOW TEST:43.376 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:03:32.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jul 10 11:03:34.089: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-t4l59,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4l59/configmaps/e2e-watch-test-resource-version,UID:fe87ae29-c29c-11ea-b2c9-0242ac120008,ResourceVersion:6723,Generation:0,CreationTimestamp:2020-07-10 11:03:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 10 11:03:34.089: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-t4l59,SelfLink:/api/v1/namespaces/e2e-tests-watch-t4l59/configmaps/e2e-watch-test-resource-version,UID:fe87ae29-c29c-11ea-b2c9-0242ac120008,ResourceVersion:6724,Generation:0,CreationTimestamp:2020-07-10 11:03:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:03:34.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-t4l59" for this suite. Jul 10 11:03:40.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:03:40.890: INFO: namespace: e2e-tests-watch-t4l59, resource: bindings, ignored listing per whitelist Jul 10 11:03:40.909: INFO: namespace e2e-tests-watch-t4l59 deletion completed in 6.585995169s • [SLOW TEST:8.224 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:03:40.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 10 11:03:41.014: INFO: Waiting up to 5m0s for pod "pod-03539d35-c29d-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-br574" to be "success or failure" Jul 10 11:03:41.021: INFO: Pod "pod-03539d35-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.186978ms Jul 10 11:03:43.179: INFO: Pod "pod-03539d35-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165639535s Jul 10 11:03:45.227: INFO: Pod "pod-03539d35-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21350165s Jul 10 11:03:47.229: INFO: Pod "pod-03539d35-c29d-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.215838593s STEP: Saw pod success Jul 10 11:03:47.229: INFO: Pod "pod-03539d35-c29d-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:03:47.231: INFO: Trying to get logs from node hunter-worker2 pod pod-03539d35-c29d-11ea-a406-0242ac11000f container test-container: STEP: delete the pod Jul 10 11:03:47.330: INFO: Waiting for pod pod-03539d35-c29d-11ea-a406-0242ac11000f to disappear Jul 10 11:03:47.369: INFO: Pod pod-03539d35-c29d-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:03:47.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-br574" for this suite. Jul 10 11:03:53.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:03:53.637: INFO: namespace: e2e-tests-emptydir-br574, resource: bindings, ignored listing per whitelist Jul 10 11:03:53.675: INFO: namespace e2e-tests-emptydir-br574 deletion completed in 6.302926077s • [SLOW TEST:12.765 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:03:53.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 10 11:03:53.900: INFO: Waiting up to 5m0s for pod "pod-0b077543-c29d-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-vdjd2" to be "success or failure" Jul 10 11:03:53.994: INFO: Pod "pod-0b077543-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 94.60036ms Jul 10 11:03:55.998: INFO: Pod "pod-0b077543-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098025174s Jul 10 11:03:58.006: INFO: Pod "pod-0b077543-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106367076s Jul 10 11:04:00.010: INFO: Pod "pod-0b077543-c29d-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.110513819s STEP: Saw pod success Jul 10 11:04:00.010: INFO: Pod "pod-0b077543-c29d-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:04:00.014: INFO: Trying to get logs from node hunter-worker pod pod-0b077543-c29d-11ea-a406-0242ac11000f container test-container: STEP: delete the pod Jul 10 11:04:00.275: INFO: Waiting for pod pod-0b077543-c29d-11ea-a406-0242ac11000f to disappear Jul 10 11:04:00.473: INFO: Pod pod-0b077543-c29d-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:04:00.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-vdjd2" for this suite. Jul 10 11:04:08.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:04:08.594: INFO: namespace: e2e-tests-emptydir-vdjd2, resource: bindings, ignored listing per whitelist Jul 10 11:04:08.639: INFO: namespace e2e-tests-emptydir-vdjd2 deletion completed in 8.162211973s • [SLOW TEST:14.964 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:04:08.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-downwardapi-jkps STEP: Creating a pod to test atomic-volume-subpath Jul 10 11:04:09.036: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jkps" in namespace "e2e-tests-subpath-gfnpz" to be "success or failure" Jul 10 11:04:09.162: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Pending", Reason="", readiness=false. Elapsed: 125.680038ms Jul 10 11:04:11.207: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171195481s Jul 10 11:04:13.212: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175384453s Jul 10 11:04:15.216: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Pending", Reason="", readiness=false. Elapsed: 6.179518265s Jul 10 11:04:17.219: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182925114s Jul 10 11:04:19.223: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.186549923s Jul 10 11:04:21.226: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Running", Reason="", readiness=false. Elapsed: 12.189859407s Jul 10 11:04:23.230: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Running", Reason="", readiness=false. Elapsed: 14.193972231s Jul 10 11:04:25.234: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Running", Reason="", readiness=false. Elapsed: 16.197795155s Jul 10 11:04:27.238: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Running", Reason="", readiness=false. Elapsed: 18.201955851s Jul 10 11:04:29.243: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Running", Reason="", readiness=false. Elapsed: 20.20632127s Jul 10 11:04:31.246: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Running", Reason="", readiness=false. Elapsed: 22.209307196s Jul 10 11:04:33.250: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Running", Reason="", readiness=false. Elapsed: 24.21352307s Jul 10 11:04:35.492: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Running", Reason="", readiness=false. Elapsed: 26.455706864s Jul 10 11:04:37.497: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Running", Reason="", readiness=false. Elapsed: 28.460369827s Jul 10 11:04:39.501: INFO: Pod "pod-subpath-test-downwardapi-jkps": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.464806693s STEP: Saw pod success Jul 10 11:04:39.501: INFO: Pod "pod-subpath-test-downwardapi-jkps" satisfied condition "success or failure" Jul 10 11:04:39.504: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-jkps container test-container-subpath-downwardapi-jkps: STEP: delete the pod Jul 10 11:04:40.399: INFO: Waiting for pod pod-subpath-test-downwardapi-jkps to disappear Jul 10 11:04:40.660: INFO: Pod pod-subpath-test-downwardapi-jkps no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-jkps Jul 10 11:04:40.660: INFO: Deleting pod "pod-subpath-test-downwardapi-jkps" in namespace "e2e-tests-subpath-gfnpz" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:04:40.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-gfnpz" for this suite. 
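
For reference, a minimal Go sketch (using the same k8s.io API types this suite is written against) of the kind of pod an "atomic writer subpath with downward pod" spec builds: a downward API volume holding pod metadata, mounted into the container through subPath. The function name, image, and file paths below are illustrative assumptions, not the exact fixture from the run above.

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// subpathDownwardPod sketches a pod whose container sees a single downward API
// file, exposed through a subPath mount rather than the whole volume.
func subpathDownwardPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-downwardapi-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "downward",
                VolumeSource: corev1.VolumeSource{
                    // The kubelet materializes the requested metadata as files in this volume.
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "busybox", // assumption; the suite uses its own test images
                Command: []string{"sh", "-c", "cat /probe/podname"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "downward",
                    MountPath: "/probe/podname",
                    SubPath:   "podname", // mount one file out of the volume instead of the directory
                }},
            }},
        },
    }
}
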
Jul 10 11:04:47.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:04:47.082: INFO: namespace: e2e-tests-subpath-gfnpz, resource: bindings, ignored listing per whitelist Jul 10 11:04:47.089: INFO: namespace e2e-tests-subpath-gfnpz deletion completed in 6.422128147s • [SLOW TEST:38.450 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:04:47.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test hostPath mode Jul 10 11:04:47.217: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-469hd" to be "success or failure" Jul 10 11:04:47.227: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.043336ms Jul 10 11:04:49.624: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.406720304s Jul 10 11:04:51.689: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471999243s Jul 10 11:04:54.615: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 7.398212885s Jul 10 11:04:56.618: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.400956141s STEP: Saw pod success Jul 10 11:04:56.618: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Jul 10 11:04:56.620: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Jul 10 11:04:56.778: INFO: Waiting for pod pod-host-path-test to disappear Jul 10 11:04:56.787: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:04:56.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-hostpath-469hd" for this suite. 
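
The HostPath "correct mode" spec above boils down to mounting a host directory and reading back its permissions. A small sketch of that shape follows; the host path, image, and command are assumptions for illustration only.

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathModePod sketches a pod that mounts a hostPath volume and prints the
// mode of the mount point so a test could assert on it.
func hostPathModePod() *corev1.Pod {
    hostPathType := corev1.HostPathDirectoryOrCreate
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{
                        Path: "/tmp/hostpath-test", // assumption; the real suite picks its own directory
                        Type: &hostPathType,
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "test-container-1",
                Image:        "busybox", // assumption
                Command:      []string{"sh", "-c", "stat -c '%a' /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
}
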
Jul 10 11:05:04.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:05:04.899: INFO: namespace: e2e-tests-hostpath-469hd, resource: bindings, ignored listing per whitelist Jul 10 11:05:04.911: INFO: namespace e2e-tests-hostpath-469hd deletion completed in 8.122484065s • [SLOW TEST:17.821 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:05:04.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jul 10 11:05:17.443: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-356afb97-c29d-11ea-a406-0242ac11000f,GenerateName:,Namespace:e2e-tests-events-bf8sb,SelfLink:/api/v1/namespaces/e2e-tests-events-bf8sb/pods/send-events-356afb97-c29d-11ea-a406-0242ac11000f,UID:356c1f2a-c29d-11ea-b2c9-0242ac120008,ResourceVersion:7163,Generation:0,CreationTimestamp:2020-07-10 11:05:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 7917377,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-2tmm4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tmm4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-2tmm4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e06820} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001e06840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:05:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:05:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:05:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:05:05 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.24,StartTime:2020-07-10 11:05:05 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-07-10 11:05:14 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://2c5a035433364c3307b515130a58a028b593f5631deb728f730f9bf48570f1b7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jul 10 11:05:19.517: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jul 10 11:05:21.601: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:05:21.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-events-bf8sb" for this suite. Jul 10 11:06:01.817: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:06:01.878: INFO: namespace: e2e-tests-events-bf8sb, resource: bindings, ignored listing per whitelist Jul 10 11:06:01.882: INFO: namespace e2e-tests-events-bf8sb deletion completed in 40.101951522s • [SLOW TEST:56.971 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:06:01.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jul 10 11:06:30.711: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:06:32.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-4v7hh" for this suite. Jul 10 11:07:01.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:07:01.251: INFO: namespace: e2e-tests-replicaset-4v7hh, resource: bindings, ignored listing per whitelist Jul 10 11:07:01.294: INFO: namespace e2e-tests-replicaset-4v7hh deletion completed in 28.427347949s • [SLOW TEST:59.412 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:07:01.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:07:05.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-sjnhp" for this suite. 
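
The EmptyDir wrapper "should not conflict" spec above creates a secret, a configMap, and a pod that mounts both kinds of volumes side by side, then cleans all three up. A compact sketch of that pod shape follows; the object names, mount paths, and image are assumptions.

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// wrapperVolumesPod sketches one pod consuming a secret volume and a configMap
// volume at the same time, which is the situation the spec checks for conflicts.
func wrapperVolumesPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{
                {
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: "wrapper-secret"}, // assumed name
                    },
                },
                {
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "wrapper-configmap"}, // assumed name
                        },
                    },
                },
            },
            Containers: []corev1.Container{{
                Name:    "secret-test",
                Image:   "busybox", // assumption
                Command: []string{"sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true},
                    {Name: "configmap-volume", MountPath: "/etc/configmap-volume", ReadOnly: true},
                },
            }},
        },
    }
}
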
Jul 10 11:07:11.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:07:11.631: INFO: namespace: e2e-tests-emptydir-wrapper-sjnhp, resource: bindings, ignored listing per whitelist Jul 10 11:07:11.636: INFO: namespace e2e-tests-emptydir-wrapper-sjnhp deletion completed in 6.066906874s • [SLOW TEST:10.342 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:07:11.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-80f94b84-c29d-11ea-a406-0242ac11000f STEP: Creating a pod to test consume secrets Jul 10 11:07:11.798: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-80fc8fc7-c29d-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-56zhc" to be "success or failure" Jul 10 11:07:11.803: INFO: Pod "pod-projected-secrets-80fc8fc7-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.548654ms Jul 10 11:07:13.860: INFO: Pod "pod-projected-secrets-80fc8fc7-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061652652s Jul 10 11:07:15.864: INFO: Pod "pod-projected-secrets-80fc8fc7-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065527974s Jul 10 11:07:17.867: INFO: Pod "pod-projected-secrets-80fc8fc7-c29d-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068987058s STEP: Saw pod success Jul 10 11:07:17.867: INFO: Pod "pod-projected-secrets-80fc8fc7-c29d-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:07:17.870: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-80fc8fc7-c29d-11ea-a406-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Jul 10 11:07:19.128: INFO: Waiting for pod pod-projected-secrets-80fc8fc7-c29d-11ea-a406-0242ac11000f to disappear Jul 10 11:07:19.144: INFO: Pod pod-projected-secrets-80fc8fc7-c29d-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:07:19.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-56zhc" for this suite. 
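
For the Projected secret "defaultMode set" spec above, the pod consumes the secret through a projected volume whose DefaultMode controls the file permissions. The log does not show the mode value used, so the 0400 below, like the names and image, is an assumption.

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// projectedSecretPod sketches a pod mounting a secret through a projected
// volume with an explicit DefaultMode, then printing the resulting file modes.
func projectedSecretPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        DefaultMode: int32Ptr(0400), // assumption; the run above does not log the mode
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"}, // assumed name
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-secret-volume-test",
                Image:   "busybox", // assumption
                Command: []string{"sh", "-c", "stat -c '%a' /etc/projected-secret-volume/*"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-secret-volume",
                    MountPath: "/etc/projected-secret-volume",
                    ReadOnly:  true,
                }},
            }},
        },
    }
}
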
Jul 10 11:07:25.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:07:25.305: INFO: namespace: e2e-tests-projected-56zhc, resource: bindings, ignored listing per whitelist Jul 10 11:07:25.349: INFO: namespace e2e-tests-projected-56zhc deletion completed in 6.202389776s • [SLOW TEST:13.713 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:07:25.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 10 11:07:26.491: INFO: Waiting up to 5m0s for pod "pod-89aaa3f3-c29d-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-n5xhq" to be "success or failure" Jul 10 11:07:26.543: INFO: Pod "pod-89aaa3f3-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 52.679364ms Jul 10 11:07:28.554: INFO: Pod "pod-89aaa3f3-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063624485s Jul 10 11:07:30.813: INFO: Pod "pod-89aaa3f3-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.322038866s Jul 10 11:07:32.885: INFO: Pod "pod-89aaa3f3-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.394084479s Jul 10 11:07:34.945: INFO: Pod "pod-89aaa3f3-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.454238255s Jul 10 11:07:38.496: INFO: Pod "pod-89aaa3f3-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.004781739s Jul 10 11:07:41.041: INFO: Pod "pod-89aaa3f3-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.550235623s Jul 10 11:07:43.043: INFO: Pod "pod-89aaa3f3-c29d-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.552672518s STEP: Saw pod success Jul 10 11:07:43.043: INFO: Pod "pod-89aaa3f3-c29d-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:07:43.045: INFO: Trying to get logs from node hunter-worker2 pod pod-89aaa3f3-c29d-11ea-a406-0242ac11000f container test-container: STEP: delete the pod Jul 10 11:07:43.068: INFO: Waiting for pod pod-89aaa3f3-c29d-11ea-a406-0242ac11000f to disappear Jul 10 11:07:43.111: INFO: Pod pod-89aaa3f3-c29d-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:07:43.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-n5xhq" for this suite. Jul 10 11:07:51.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:07:51.157: INFO: namespace: e2e-tests-emptydir-n5xhq, resource: bindings, ignored listing per whitelist Jul 10 11:07:51.242: INFO: namespace e2e-tests-emptydir-n5xhq deletion completed in 8.128112672s • [SLOW TEST:25.892 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:07:51.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 10 11:07:51.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jul 10 11:07:51.850: INFO: stderr: "" Jul 10 11:07:51.850: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-10T10:25:27Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:07:51.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8ld8l" for this suite. 
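
The kubectl version spec above amounts to running the binary and confirming that both the client and server stanzas show up in the output, as seen in the stdout captured in the log. A standalone sketch of that check follows; treat the binary path and kubeconfig flag as assumptions about the environment, not part of the suite's own helpers.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Run `kubectl version` the same way the log shows, capturing all output.
    out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "version").CombinedOutput()
    if err != nil {
        fmt.Println("kubectl version failed:", err)
        return
    }
    stdout := string(out)
    // The conformance check is simply that both halves of the version report are present.
    if strings.Contains(stdout, "Client Version") && strings.Contains(stdout, "Server Version") {
        fmt.Println("all data is printed")
    } else {
        fmt.Println("missing client or server version in output")
    }
}
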
Jul 10 11:07:57.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:07:57.886: INFO: namespace: e2e-tests-kubectl-8ld8l, resource: bindings, ignored listing per whitelist Jul 10 11:07:57.991: INFO: namespace e2e-tests-kubectl-8ld8l deletion completed in 6.134764431s • [SLOW TEST:6.749 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:07:57.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 10 11:07:58.570: INFO: Waiting up to 5m0s for pod "pod-9cda5c58-c29d-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-krpsp" to be "success or failure" Jul 10 11:07:58.611: INFO: Pod "pod-9cda5c58-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 40.688861ms Jul 10 11:08:00.615: INFO: Pod "pod-9cda5c58-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044581832s Jul 10 11:08:02.766: INFO: Pod "pod-9cda5c58-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195563697s Jul 10 11:08:04.975: INFO: Pod "pod-9cda5c58-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.404497902s Jul 10 11:08:06.978: INFO: Pod "pod-9cda5c58-c29d-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.408136101s STEP: Saw pod success Jul 10 11:08:06.978: INFO: Pod "pod-9cda5c58-c29d-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:08:06.981: INFO: Trying to get logs from node hunter-worker pod pod-9cda5c58-c29d-11ea-a406-0242ac11000f container test-container: STEP: delete the pod Jul 10 11:08:07.024: INFO: Waiting for pod pod-9cda5c58-c29d-11ea-a406-0242ac11000f to disappear Jul 10 11:08:07.038: INFO: Pod pod-9cda5c58-c29d-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:08:07.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-krpsp" for this suite. 
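
The emptyDir permission specs in this stretch of the run all follow one pattern: a default-medium (disk-backed) emptyDir, a non-root security context where the name says "non-root", and a container that writes a file and reports the observed mode. The sketch below shows that shape; the UID, image, and command are assumptions rather than the exact test image flags.

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

// emptyDirNonRootPod sketches a non-root pod exercising an emptyDir volume on
// the node's default medium and echoing back the permissions it sees.
func emptyDirNonRootPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: int64Ptr(1001), // any non-root UID; the exact value is an assumption
            },
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // An empty Medium selects the default, disk-backed emptyDir (as opposed to tmpfs).
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "busybox", // assumption
                Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume /test-volume/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
}
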
Jul 10 11:08:17.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:08:17.207: INFO: namespace: e2e-tests-emptydir-krpsp, resource: bindings, ignored listing per whitelist Jul 10 11:08:17.209: INFO: namespace e2e-tests-emptydir-krpsp deletion completed in 10.150664001s • [SLOW TEST:19.218 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:08:17.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 10 11:08:17.395: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a81624d6-c29d-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-prznr" to be "success or failure" Jul 10 11:08:17.403: INFO: Pod "downwardapi-volume-a81624d6-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.463572ms Jul 10 11:08:19.748: INFO: Pod "downwardapi-volume-a81624d6-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.352314949s Jul 10 11:08:21.826: INFO: Pod "downwardapi-volume-a81624d6-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.430703484s Jul 10 11:08:24.012: INFO: Pod "downwardapi-volume-a81624d6-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.617210795s Jul 10 11:08:26.047: INFO: Pod "downwardapi-volume-a81624d6-c29d-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.65185088s STEP: Saw pod success Jul 10 11:08:26.047: INFO: Pod "downwardapi-volume-a81624d6-c29d-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:08:26.066: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-a81624d6-c29d-11ea-a406-0242ac11000f container client-container: STEP: delete the pod Jul 10 11:08:26.863: INFO: Waiting for pod downwardapi-volume-a81624d6-c29d-11ea-a406-0242ac11000f to disappear Jul 10 11:08:26.874: INFO: Pod downwardapi-volume-a81624d6-c29d-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:08:26.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-prznr" for this suite. 
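
For the Projected downwardAPI "cpu request" spec above, the pod exposes its own CPU request to the container as a file via a projected downward API volume and a resourceFieldRef. A sketch of that wiring follows; the request value, image, and paths are assumptions.

package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedDownwardAPIPod sketches a pod whose container can read its own CPU
// request from a file written by the projected downward API volume.
func projectedDownwardAPIPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "cpu_request",
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "requests.cpu",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox", // assumption
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceCPU: resource.MustParse("250m"), // assumption; the log does not show the request
                    },
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
}
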
Jul 10 11:08:34.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:08:35.009: INFO: namespace: e2e-tests-projected-prznr, resource: bindings, ignored listing per whitelist Jul 10 11:08:35.019: INFO: namespace e2e-tests-projected-prznr deletion completed in 8.143014895s • [SLOW TEST:17.810 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:08:35.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 10 11:08:35.203: INFO: Creating deployment "nginx-deployment" Jul 10 11:08:35.225: INFO: Waiting for observed generation 1 Jul 10 11:08:37.712: INFO: Waiting for all required pods to come up Jul 10 11:08:37.716: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jul 10 11:08:54.844: INFO: Waiting for deployment "nginx-deployment" to complete Jul 10 11:08:54.848: INFO: Updating deployment "nginx-deployment" with a non-existent image Jul 10 11:08:54.855: INFO: Updating deployment nginx-deployment Jul 10 11:08:54.855: INFO: Waiting for observed generation 2 Jul 10 11:08:57.466: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jul 10 11:08:58.047: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jul 10 11:08:58.062: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jul 10 11:08:59.263: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jul 10 11:08:59.263: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jul 10 11:08:59.337: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jul 10 11:08:59.342: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jul 10 11:08:59.342: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jul 10 11:08:59.347: INFO: Updating deployment nginx-deployment Jul 10 11:08:59.347: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jul 10 11:08:59.553: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jul 10 11:08:59.636: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 10 11:09:00.248: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p6tt8/deployments/nginx-deployment,UID:b2b48b31-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8325,Generation:3,CreationTimestamp:2020-07-10 11:08:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2020-07-10 11:08:57 +0000 UTC 2020-07-10 11:08:35 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-07-10 11:08:59 +0000 UTC 2020-07-10 11:08:59 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jul 10 11:09:00.329: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p6tt8/replicasets/nginx-deployment-5c98f8fb5,UID:be6b11f5-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8323,Generation:3,CreationTimestamp:2020-07-10 11:08:54 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b2b48b31-c29d-11ea-b2c9-0242ac120008 0xc001c08fa7 0xc001c08fa8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 10 11:09:00.330: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jul 10 11:09:00.330: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-p6tt8/replicasets/nginx-deployment-85ddf47c5d,UID:b2b8a87d-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8366,Generation:3,CreationTimestamp:2020-07-10 11:08:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b2b48b31-c29d-11ea-b2c9-0242ac120008 0xc001c09067 0xc001c09068}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jul 10 11:09:00.415: INFO: Pod "nginx-deployment-5c98f8fb5-2zcrx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2zcrx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-5c98f8fb5-2zcrx,UID:c1a7b2a7-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8385,Generation:0,CreationTimestamp:2020-07-10 11:09:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 be6b11f5-c29d-11ea-b2c9-0242ac120008 0xc001c09cd0 0xc001c09cd1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c09d50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c09d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.415: INFO: Pod "nginx-deployment-5c98f8fb5-62tfm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-62tfm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-5c98f8fb5-62tfm,UID:be740801-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8289,Generation:0,CreationTimestamp:2020-07-10 11:08:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 be6b11f5-c29d-11ea-b2c9-0242ac120008 0xc001c09de0 0xc001c09de1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c09e60} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c09e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-07-10 11:08:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:55 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-10 11:08:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.415: INFO: Pod "nginx-deployment-5c98f8fb5-8vn4z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-8vn4z,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-5c98f8fb5-8vn4z,UID:c15560ea-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8348,Generation:0,CreationTimestamp:2020-07-10 11:08:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 be6b11f5-c29d-11ea-b2c9-0242ac120008 0xc001c09f40 0xc001c09f41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001c09fd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001c09ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.415: INFO: Pod "nginx-deployment-5c98f8fb5-djr4j" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-djr4j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-5c98f8fb5-djr4j,UID:c1a0d9b6-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8374,Generation:0,CreationTimestamp:2020-07-10 11:09:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 be6b11f5-c29d-11ea-b2c9-0242ac120008 0xc001e9c1e0 0xc001e9c1e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9c260} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9c280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.415: INFO: Pod "nginx-deployment-5c98f8fb5-ftdpt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ftdpt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-5c98f8fb5-ftdpt,UID:be740b48-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8300,Generation:0,CreationTimestamp:2020-07-10 11:08:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 be6b11f5-c29d-11ea-b2c9-0242ac120008 0xc001e9c490 0xc001e9c491}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] 
map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9c510} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9c530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:55 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-10 11:08:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.415: INFO: Pod "nginx-deployment-5c98f8fb5-h5c7j" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-h5c7j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-5c98f8fb5-h5c7j,UID:c1552e00-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8346,Generation:0,CreationTimestamp:2020-07-10 11:08:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 be6b11f5-c29d-11ea-b2c9-0242ac120008 0xc001e9c600 0xc001e9c601}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9c730} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9c750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.415: INFO: Pod "nginx-deployment-5c98f8fb5-h5s9z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-h5s9z,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-5c98f8fb5-h5s9z,UID:be71fc53-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8386,Generation:0,CreationTimestamp:2020-07-10 11:08:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 be6b11f5-c29d-11ea-b2c9-0242ac120008 0xc001e9c7c0 0xc001e9c7c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9c850} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9c870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-07-10 11:08:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:54 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.36,StartTime:2020-07-10 11:08:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.416: INFO: Pod "nginx-deployment-5c98f8fb5-hzpbk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-hzpbk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-5c98f8fb5-hzpbk,UID:c1a0e394-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8375,Generation:0,CreationTimestamp:2020-07-10 11:09:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 be6b11f5-c29d-11ea-b2c9-0242ac120008 0xc001e9ca00 0xc001e9ca01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9ca80} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9cb40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.416: INFO: Pod "nginx-deployment-5c98f8fb5-l7xbc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-l7xbc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-5c98f8fb5-l7xbc,UID:c1a0dfa8-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8376,Generation:0,CreationTimestamp:2020-07-10 11:09:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 be6b11f5-c29d-11ea-b2c9-0242ac120008 0xc001e9cdb0 0xc001e9cdb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9ce30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9ce50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.416: INFO: Pod "nginx-deployment-5c98f8fb5-nwtjp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nwtjp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-5c98f8fb5-nwtjp,UID:bedc95ab-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8305,Generation:0,CreationTimestamp:2020-07-10 11:08:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 be6b11f5-c29d-11ea-b2c9-0242ac120008 0xc001e9cfc0 0xc001e9cfc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] 
map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9d040} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9d060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:56 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-10 11:08:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.416: INFO: Pod "nginx-deployment-5c98f8fb5-nxx8f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nxx8f,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-5c98f8fb5-nxx8f,UID:bf2eb077-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8316,Generation:0,CreationTimestamp:2020-07-10 11:08:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 be6b11f5-c29d-11ea-b2c9-0242ac120008 0xc001e9d2e0 0xc001e9d2e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9d360} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9d3e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:56 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-10 11:08:56 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.416: INFO: Pod "nginx-deployment-5c98f8fb5-shqgk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-shqgk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-5c98f8fb5-shqgk,UID:c1a0daab-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8373,Generation:0,CreationTimestamp:2020-07-10 11:09:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 be6b11f5-c29d-11ea-b2c9-0242ac120008 0xc001e9d590 0xc001e9d591}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9d610} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9d630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.416: INFO: Pod "nginx-deployment-5c98f8fb5-vljth" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-vljth,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-5c98f8fb5-vljth,UID:c1449513-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8336,Generation:0,CreationTimestamp:2020-07-10 11:08:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 be6b11f5-c29d-11ea-b2c9-0242ac120008 0xc001e9d710 0xc001e9d711}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9d850} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9d870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:59 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.416: INFO: Pod "nginx-deployment-85ddf47c5d-2dnqn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2dnqn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-2dnqn,UID:c1449a59-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8343,Generation:0,CreationTimestamp:2020-07-10 11:08:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001e9d8e0 0xc001e9d8e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9da50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9da70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:59 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.417: INFO: Pod "nginx-deployment-85ddf47c5d-4nxrg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4nxrg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-4nxrg,UID:b2ca3cdd-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8210,Generation:0,CreationTimestamp:2020-07-10 11:08:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001e9dae0 0xc001e9dae1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9dc20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9dc40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.41,StartTime:2020-07-10 11:08:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-10 11:08:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://cc955549edf071f1179ee6a9feea07cee2e94e33f2f7525267782ad12f154e6c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.417: INFO: Pod "nginx-deployment-85ddf47c5d-6n29l" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6n29l,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-6n29l,UID:b2c8b498-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8238,Generation:0,CreationTimestamp:2020-07-10 11:08:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001e9dd00 0xc001e9dd01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9dd70} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9dd90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.44,StartTime:2020-07-10 11:08:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-10 11:08:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a7129b2ca43ae36534aef0ed2a704de36bece9e8f7dee0684141e5cbcea36234}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.417: INFO: Pod "nginx-deployment-85ddf47c5d-74h8f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-74h8f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-74h8f,UID:c15547fd-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8350,Generation:0,CreationTimestamp:2020-07-10 11:08:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001e9de50 0xc001e9de51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9dec0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9dee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.417: INFO: Pod "nginx-deployment-85ddf47c5d-74r6c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-74r6c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-74r6c,UID:c15558e7-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8351,Generation:0,CreationTimestamp:2020-07-10 11:08:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001e9df50 0xc001e9df51}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001e9dfc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001e9dfe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.417: INFO: Pod "nginx-deployment-85ddf47c5d-75vvm" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-75vvm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-75vvm,UID:b2cf9653-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8213,Generation:0,CreationTimestamp:2020-07-10 11:08:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4e050 0xc001f4e051}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4e0c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4e0e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.42,StartTime:2020-07-10 11:08:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-10 11:08:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://788fe87dae859c58542a9a97d15d02854689100d01618d17284657b8efe5a1c2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.417: INFO: Pod "nginx-deployment-85ddf47c5d-bj86w" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bj86w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-bj86w,UID:b2c88e03-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8223,Generation:0,CreationTimestamp:2020-07-10 11:08:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4e210 0xc001f4e211}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4e280} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4e2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.30,StartTime:2020-07-10 11:08:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-10 11:08:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7b921e42e5315f374e708efa98f0e262b5aedbbe52ddad25f9646d56663df560}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.418: INFO: Pod "nginx-deployment-85ddf47c5d-cvtvr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cvtvr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-cvtvr,UID:b2ca3318-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8191,Generation:0,CreationTimestamp:2020-07-10 11:08:35 
+0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4e410 0xc001f4e411}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4e480} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4e4a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.40,StartTime:2020-07-10 11:08:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-10 11:08:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://81376ad0eda42cc6e9587c42eb14a1b37e4a75e7165c1b6a2795106f8eaae8ac}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.418: INFO: Pod "nginx-deployment-85ddf47c5d-f7gjl" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-f7gjl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-f7gjl,UID:b2ca341f-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8216,Generation:0,CreationTimestamp:2020-07-10 11:08:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4e6a0 0xc001f4e6a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4e710} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4e730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.32,StartTime:2020-07-10 11:08:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-10 11:08:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bb0de4e004a6688c37c8daa6b0afadb5d7bb84d75e2191ebd8f2976013610bdb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.418: INFO: Pod "nginx-deployment-85ddf47c5d-fj6wz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fj6wz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-fj6wz,UID:c1555637-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8353,Generation:0,CreationTimestamp:2020-07-10 11:08:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4e880 0xc001f4e881}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log 
File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4e8f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4e910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.418: INFO: Pod "nginx-deployment-85ddf47c5d-fntzs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fntzs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-fntzs,UID:c1555eb9-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8361,Generation:0,CreationTimestamp:2020-07-10 11:08:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4e980 0xc001f4e981}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4e9f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc001f4ea10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.418: INFO: Pod "nginx-deployment-85ddf47c5d-kmh2z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kmh2z,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-kmh2z,UID:c138174a-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8347,Generation:0,CreationTimestamp:2020-07-10 11:08:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4ea80 0xc001f4ea81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4eaf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4eb10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:59 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:59 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-10 11:08:59 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.418: INFO: Pod 
"nginx-deployment-85ddf47c5d-mb8lx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mb8lx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-mb8lx,UID:c1a099c3-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8365,Generation:0,CreationTimestamp:2020-07-10 11:09:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4ebc0 0xc001f4ebc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4ec30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4ec50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.418: INFO: Pod "nginx-deployment-85ddf47c5d-nhmpq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nhmpq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-nhmpq,UID:c1a0c4df-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8377,Generation:0,CreationTimestamp:2020-07-10 11:09:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4ecc0 0xc001f4ecc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4ed30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4ed50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.419: INFO: Pod "nginx-deployment-85ddf47c5d-qb5kq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qb5kq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-qb5kq,UID:c1a0d63a-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8378,Generation:0,CreationTimestamp:2020-07-10 11:09:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4edc0 0xc001f4edc1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4ee30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4ee50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.419: INFO: Pod "nginx-deployment-85ddf47c5d-qglm6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qglm6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-qglm6,UID:b2ca26a7-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8227,Generation:0,CreationTimestamp:2020-07-10 11:08:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4eec0 0xc001f4eec1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4ef30} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4ef50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.31,StartTime:2020-07-10 11:08:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-10 11:08:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8594837cea9d77116ba54690c6f1fd53c428f5cc06e1eacde1981e31c436b18d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.419: INFO: Pod "nginx-deployment-85ddf47c5d-qljv8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qljv8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-qljv8,UID:c1a0d5c1-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8372,Generation:0,CreationTimestamp:2020-07-10 11:09:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4f010 0xc001f4f011}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4f090} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4f0b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.419: INFO: Pod "nginx-deployment-85ddf47c5d-v85nq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v85nq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-v85nq,UID:b2c8171a-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8234,Generation:0,CreationTimestamp:2020-07-10 11:08:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4f140 0xc001f4f141}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4f1c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4f1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:35 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.43,StartTime:2020-07-10 11:08:35 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-10 11:08:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3131cef23398993da89494d33f08fca2703231e8bb5156fac55c9b8388ca2392}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.419: INFO: Pod "nginx-deployment-85ddf47c5d-wl9zh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wl9zh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-wl9zh,UID:c14486f7-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8335,Generation:0,CreationTimestamp:2020-07-10 
11:08:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4f2a0 0xc001f4f2a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4f320} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4f350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:08:59 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jul 10 11:09:00.419: INFO: Pod "nginx-deployment-85ddf47c5d-z2jcb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-z2jcb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-p6tt8,SelfLink:/api/v1/namespaces/e2e-tests-deployment-p6tt8/pods/nginx-deployment-85ddf47c5d-z2jcb,UID:c1a0c474-c29d-11ea-b2c9-0242ac120008,ResourceVersion:8368,Generation:0,CreationTimestamp:2020-07-10 11:09:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d b2b8a87d-c29d-11ea-b2c9-0242ac120008 0xc001f4f3f0 0xc001f4f3f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9srvt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9srvt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-9srvt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001f4f460} {node.kubernetes.io/unreachable Exists NoExecute 0xc001f4f480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:09:00 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:09:00.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-p6tt8" for this suite. Jul 10 11:09:38.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:09:38.292: INFO: namespace: e2e-tests-deployment-p6tt8, resource: bindings, ignored listing per whitelist Jul 10 11:09:38.351: INFO: namespace e2e-tests-deployment-p6tt8 deletion completed in 37.756812127s • [SLOW TEST:63.331 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:09:38.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-d865f26a-c29d-11ea-a406-0242ac11000f STEP: Creating a pod to test consume configMaps Jul 10 11:09:38.491: INFO: Waiting up to 5m0s for pod "pod-configmaps-d86c73f4-c29d-11ea-a406-0242ac11000f" in namespace "e2e-tests-configmap-tqmvf" to be "success or failure" Jul 10 11:09:38.496: INFO: Pod "pod-configmaps-d86c73f4-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.587486ms Jul 10 11:09:40.551: INFO: Pod "pod-configmaps-d86c73f4-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059696471s Jul 10 11:09:42.554: INFO: Pod "pod-configmaps-d86c73f4-c29d-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062910625s Jul 10 11:09:44.558: INFO: Pod "pod-configmaps-d86c73f4-c29d-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.067485236s STEP: Saw pod success Jul 10 11:09:44.559: INFO: Pod "pod-configmaps-d86c73f4-c29d-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:09:44.562: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-d86c73f4-c29d-11ea-a406-0242ac11000f container configmap-volume-test: STEP: delete the pod Jul 10 11:09:44.959: INFO: Waiting for pod pod-configmaps-d86c73f4-c29d-11ea-a406-0242ac11000f to disappear Jul 10 11:09:44.961: INFO: Pod pod-configmaps-d86c73f4-c29d-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:09:44.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-tqmvf" for this suite. Jul 10 11:09:51.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:09:51.177: INFO: namespace: e2e-tests-configmap-tqmvf, resource: bindings, ignored listing per whitelist Jul 10 11:09:51.796: INFO: namespace e2e-tests-configmap-tqmvf deletion completed in 6.759972374s • [SLOW TEST:13.445 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:09:51.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Creating an uninitialized pod in the namespace Jul 10 11:10:00.084: INFO: error from create uninitialized namespace: STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
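The namespace-deletion behaviour this test exercises (all pods in a namespace are removed when the namespace goes away) can be reproduced outside the suite with plain kubectl. The sketch below is illustrative only, not the suite's own code; the namespace name "delete-test" and the pod name "nginx" are placeholders, and a reachable cluster is assumed.

```shell
# Create a throwaway namespace and a pod inside it.
kubectl create namespace delete-test
kubectl run nginx --image=docker.io/library/nginx:1.14-alpine --restart=Never --namespace=delete-test

# Wait until the pod is running, then delete the namespace.
kubectl wait --for=condition=Ready pod/nginx --namespace=delete-test --timeout=120s
kubectl delete namespace delete-test

# Once deletion finishes, the namespace (and every pod it held) is gone,
# so this lookup should report that nothing is found.
kubectl get pods --namespace=delete-test
```
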
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:10:30.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-g9p2f" for this suite. Jul 10 11:10:38.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:10:38.786: INFO: namespace: e2e-tests-namespaces-g9p2f, resource: bindings, ignored listing per whitelist Jul 10 11:10:39.485: INFO: namespace e2e-tests-namespaces-g9p2f deletion completed in 9.454681321s STEP: Destroying namespace "e2e-tests-nsdeletetest-67d6z" for this suite. Jul 10 11:10:39.487: INFO: Namespace e2e-tests-nsdeletetest-67d6z was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-kbhl9" for this suite. Jul 10 11:10:45.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:10:45.861: INFO: namespace: e2e-tests-nsdeletetest-kbhl9, resource: bindings, ignored listing per whitelist Jul 10 11:10:45.906: INFO: namespace e2e-tests-nsdeletetest-kbhl9 deletion completed in 6.418903785s • [SLOW TEST:54.110 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:10:45.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
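A DaemonSet comparable to the suite's "daemon-set" fixture can be created by hand to watch the same per-node scheduling. The manifest below is a minimal sketch under assumed names: the "ds-demo" namespace, the "app: daemon-set" label, and the nginx image (borrowed from elsewhere in this run) are placeholders, not what the framework actually deploys.

```shell
kubectl create namespace ds-demo

# Minimal DaemonSet; every schedulable node should end up running one pod.
kubectl apply --namespace=ds-demo -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

# Compare how many pods the controller wants vs. how many are ready.
# Control-plane nodes carrying NoSchedule taints are skipped, as the log notes.
kubectl get daemonset daemon-set --namespace=ds-demo \
  -o jsonpath='{.status.desiredNumberScheduled} {.status.numberReady}{"\n"}'
```
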
Jul 10 11:10:48.174: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:10:48.177: INFO: Number of nodes with available pods: 0 Jul 10 11:10:48.177: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:10:49.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:10:49.443: INFO: Number of nodes with available pods: 0 Jul 10 11:10:49.443: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:10:50.181: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:10:50.185: INFO: Number of nodes with available pods: 0 Jul 10 11:10:50.185: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:10:51.608: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:10:52.027: INFO: Number of nodes with available pods: 0 Jul 10 11:10:52.027: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:10:52.416: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:10:52.419: INFO: Number of nodes with available pods: 0 Jul 10 11:10:52.419: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:10:53.182: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:10:53.185: INFO: Number of nodes with available pods: 0 Jul 10 11:10:53.185: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:10:54.236: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:10:54.239: INFO: Number of nodes with available pods: 0 Jul 10 11:10:54.239: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:10:55.500: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:10:55.503: INFO: Number of nodes with available pods: 2 Jul 10 11:10:55.503: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Jul 10 11:10:56.151: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:10:56.679: INFO: Number of nodes with available pods: 1 Jul 10 11:10:56.679: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:10:57.842: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:10:57.844: INFO: Number of nodes with available pods: 1 Jul 10 11:10:57.844: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:10:58.769: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:10:58.857: INFO: Number of nodes with available pods: 1 Jul 10 11:10:58.857: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:11:00.197: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:11:00.200: INFO: Number of nodes with available pods: 1 Jul 10 11:11:00.200: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:11:00.691: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:11:00.694: INFO: Number of nodes with available pods: 1 Jul 10 11:11:00.694: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:11:01.684: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:11:01.688: INFO: Number of nodes with available pods: 1 Jul 10 11:11:01.688: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:11:02.714: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:11:02.717: INFO: Number of nodes with available pods: 1 Jul 10 11:11:02.717: INFO: Node hunter-worker is running more than one daemon pod Jul 10 11:11:04.730: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 10 11:11:04.768: INFO: Number of nodes with available pods: 2 Jul 10 11:11:04.768: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
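To see the same per-node availability the log keeps reporting ("Number of running nodes: 2, number of available pods: 2"), the daemon pods can be listed together with the node each one landed on. Again a sketch against the placeholder namespace and label used above.

```shell
# One row per daemon pod: pod name, the node it runs on, and container readiness.
kubectl get pods --namespace=ds-demo -l app=daemon-set \
  -o custom-columns='POD:.metadata.name,NODE:.spec.nodeName,READY:.status.containerStatuses[0].ready'
```
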
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-k46vr, will wait for the garbage collector to delete the pods Jul 10 11:11:04.987: INFO: Deleting DaemonSet.extensions daemon-set took: 6.20591ms Jul 10 11:11:06.487: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.500236032s Jul 10 11:11:18.704: INFO: Number of nodes with available pods: 0 Jul 10 11:11:18.704: INFO: Number of running nodes: 0, number of available pods: 0 Jul 10 11:11:18.707: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-k46vr/daemonsets","resourceVersion":"9175"},"items":null} Jul 10 11:11:18.709: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-k46vr/pods","resourceVersion":"9175"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:11:18.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-daemonsets-k46vr" for this suite. Jul 10 11:11:24.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:11:24.767: INFO: namespace: e2e-tests-daemonsets-k46vr, resource: bindings, ignored listing per whitelist Jul 10 11:11:24.843: INFO: namespace e2e-tests-daemonsets-k46vr deletion completed in 6.12431501s • [SLOW TEST:38.937 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:11:24.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 10 11:11:25.698: INFO: Waiting up to 5m0s for pod "pod-18502464-c29e-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-56m4n" to be "success or failure" Jul 10 11:11:25.714: INFO: Pod "pod-18502464-c29e-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.399614ms Jul 10 11:11:27.727: INFO: Pod "pod-18502464-c29e-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029211951s Jul 10 11:11:29.731: INFO: Pod "pod-18502464-c29e-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.032456214s Jul 10 11:11:31.735: INFO: Pod "pod-18502464-c29e-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036498068s STEP: Saw pod success Jul 10 11:11:31.735: INFO: Pod "pod-18502464-c29e-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:11:31.737: INFO: Trying to get logs from node hunter-worker2 pod pod-18502464-c29e-11ea-a406-0242ac11000f container test-container: STEP: delete the pod Jul 10 11:11:31.861: INFO: Waiting for pod pod-18502464-c29e-11ea-a406-0242ac11000f to disappear Jul 10 11:11:31.906: INFO: Pod pod-18502464-c29e-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:11:31.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-56m4n" for this suite. Jul 10 11:11:40.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:11:41.038: INFO: namespace: e2e-tests-emptydir-56m4n, resource: bindings, ignored listing per whitelist Jul 10 11:11:41.063: INFO: namespace e2e-tests-emptydir-56m4n deletion completed in 9.153339896s • [SLOW TEST:16.219 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:11:41.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jul 10 11:11:41.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:11:51.582: INFO: stderr: "" Jul 10 11:11:51.582: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
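The replication controller being created and scaled here comes from the suite's update-demo fixture (applied via `kubectl create -f -` above). A hand-rolled equivalent looks roughly like the sketch below; the "kubectl-demo" namespace is a placeholder, while the image and the `name=update-demo` label are the ones the log itself reports.

```shell
kubectl create namespace kubectl-demo

# Two-replica RC selecting pods by the name=update-demo label, as in the test.
kubectl create --namespace=kubectl-demo -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
EOF

# The suite then polls for both pods by label, as the kubectl calls below show.
kubectl get pods --namespace=kubectl-demo -l name=update-demo
```
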
Jul 10 11:11:51.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:11:51.677: INFO: stderr: "" Jul 10 11:11:51.677: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 Jul 10 11:11:56.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:11:56.776: INFO: stderr: "" Jul 10 11:11:56.776: INFO: stdout: "update-demo-nautilus-dc9cs update-demo-nautilus-sjzg5 " Jul 10 11:11:56.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dc9cs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:11:56.854: INFO: stderr: "" Jul 10 11:11:56.854: INFO: stdout: "" Jul 10 11:11:56.854: INFO: update-demo-nautilus-dc9cs is created but not running Jul 10 11:12:01.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:01.945: INFO: stderr: "" Jul 10 11:12:01.945: INFO: stdout: "update-demo-nautilus-dc9cs update-demo-nautilus-sjzg5 " Jul 10 11:12:01.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dc9cs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:02.079: INFO: stderr: "" Jul 10 11:12:02.079: INFO: stdout: "" Jul 10 11:12:02.079: INFO: update-demo-nautilus-dc9cs is created but not running Jul 10 11:12:07.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:07.172: INFO: stderr: "" Jul 10 11:12:07.172: INFO: stdout: "update-demo-nautilus-dc9cs update-demo-nautilus-sjzg5 " Jul 10 11:12:07.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dc9cs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:07.273: INFO: stderr: "" Jul 10 11:12:07.274: INFO: stdout: "true" Jul 10 11:12:07.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dc9cs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:07.359: INFO: stderr: "" Jul 10 11:12:07.359: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 10 11:12:07.359: INFO: validating pod update-demo-nautilus-dc9cs Jul 10 11:12:07.362: INFO: got data: { "image": "nautilus.jpg" } Jul 10 11:12:07.362: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jul 10 11:12:07.362: INFO: update-demo-nautilus-dc9cs is verified up and running Jul 10 11:12:07.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjzg5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:07.462: INFO: stderr: "" Jul 10 11:12:07.462: INFO: stdout: "true" Jul 10 11:12:07.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjzg5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:07.550: INFO: stderr: "" Jul 10 11:12:07.550: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 10 11:12:07.551: INFO: validating pod update-demo-nautilus-sjzg5 Jul 10 11:12:07.554: INFO: got data: { "image": "nautilus.jpg" } Jul 10 11:12:07.554: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 10 11:12:07.554: INFO: update-demo-nautilus-sjzg5 is verified up and running STEP: scaling down the replication controller Jul 10 11:12:07.556: INFO: scanned /root for discovery docs: Jul 10 11:12:07.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:08.701: INFO: stderr: "" Jul 10 11:12:08.701: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 10 11:12:08.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:08.800: INFO: stderr: "" Jul 10 11:12:08.801: INFO: stdout: "update-demo-nautilus-dc9cs update-demo-nautilus-sjzg5 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 10 11:12:13.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:14.287: INFO: stderr: "" Jul 10 11:12:14.288: INFO: stdout: "update-demo-nautilus-dc9cs update-demo-nautilus-sjzg5 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 10 11:12:19.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:19.386: INFO: stderr: "" Jul 10 11:12:19.386: INFO: stdout: "update-demo-nautilus-sjzg5 " Jul 10 11:12:19.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjzg5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:19.481: INFO: stderr: "" Jul 10 11:12:19.481: INFO: stdout: "true" Jul 10 11:12:19.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjzg5 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:19.703: INFO: stderr: "" Jul 10 11:12:19.703: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 10 11:12:19.704: INFO: validating pod update-demo-nautilus-sjzg5 Jul 10 11:12:19.707: INFO: got data: { "image": "nautilus.jpg" } Jul 10 11:12:19.707: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 10 11:12:19.707: INFO: update-demo-nautilus-sjzg5 is verified up and running STEP: scaling up the replication controller Jul 10 11:12:19.709: INFO: scanned /root for discovery docs: Jul 10 11:12:19.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:20.984: INFO: stderr: "" Jul 10 11:12:20.984: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 10 11:12:20.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:21.123: INFO: stderr: "" Jul 10 11:12:21.123: INFO: stdout: "update-demo-nautilus-sjzg5 update-demo-nautilus-v5p95 " Jul 10 11:12:21.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjzg5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:21.380: INFO: stderr: "" Jul 10 11:12:21.380: INFO: stdout: "true" Jul 10 11:12:21.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjzg5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:21.477: INFO: stderr: "" Jul 10 11:12:21.477: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 10 11:12:21.477: INFO: validating pod update-demo-nautilus-sjzg5 Jul 10 11:12:21.525: INFO: got data: { "image": "nautilus.jpg" } Jul 10 11:12:21.525: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 10 11:12:21.525: INFO: update-demo-nautilus-sjzg5 is verified up and running Jul 10 11:12:21.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v5p95 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:21.629: INFO: stderr: "" Jul 10 11:12:21.629: INFO: stdout: "" Jul 10 11:12:21.629: INFO: update-demo-nautilus-v5p95 is created but not running Jul 10 11:12:26.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:26.837: INFO: stderr: "" Jul 10 11:12:26.837: INFO: stdout: "update-demo-nautilus-sjzg5 update-demo-nautilus-v5p95 " Jul 10 11:12:26.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjzg5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:26.944: INFO: stderr: "" Jul 10 11:12:26.944: INFO: stdout: "true" Jul 10 11:12:26.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sjzg5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:27.046: INFO: stderr: "" Jul 10 11:12:27.046: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 10 11:12:27.046: INFO: validating pod update-demo-nautilus-sjzg5 Jul 10 11:12:27.049: INFO: got data: { "image": "nautilus.jpg" } Jul 10 11:12:27.049: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 10 11:12:27.049: INFO: update-demo-nautilus-sjzg5 is verified up and running Jul 10 11:12:27.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v5p95 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:27.140: INFO: stderr: "" Jul 10 11:12:27.140: INFO: stdout: "true" Jul 10 11:12:27.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v5p95 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:27.232: INFO: stderr: "" Jul 10 11:12:27.232: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 10 11:12:27.232: INFO: validating pod update-demo-nautilus-v5p95 Jul 10 11:12:27.236: INFO: got data: { "image": "nautilus.jpg" } Jul 10 11:12:27.236: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 10 11:12:27.236: INFO: update-demo-nautilus-v5p95 is verified up and running STEP: using delete to clean up resources Jul 10 11:12:27.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:27.515: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 10 11:12:27.516: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 10 11:12:27.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-kchv5' Jul 10 11:12:29.183: INFO: stderr: "No resources found.\n" Jul 10 11:12:29.183: INFO: stdout: "" Jul 10 11:12:29.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-kchv5 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 10 11:12:29.560: INFO: stderr: "" Jul 10 11:12:29.560: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:12:29.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-kchv5" for this suite. Jul 10 11:12:36.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:12:36.362: INFO: namespace: e2e-tests-kubectl-kchv5, resource: bindings, ignored listing per whitelist Jul 10 11:12:36.413: INFO: namespace e2e-tests-kubectl-kchv5 deletion completed in 6.574463596s • [SLOW TEST:55.350 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:12:36.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:12:36.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-qkpp9" for this suite. 
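The scale-down/scale-up sequence driven by the Update Demo test above reduces to a pair of kubectl scale calls, a poll of the matching pods, and the same forced cleanup; the namespace below is a placeholder rather than the generated one from the run:

# scale the replication controller down to 1 and back up to 2, as the test does
kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=my-namespace
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=my-namespace
# poll until the pod list matches the expected replica count
kubectl get pods -l name=update-demo --namespace=my-namespace
# clean up the way the test does, without waiting for graceful termination
kubectl delete rc update-demo-nautilus --grace-period=0 --force --namespace=my-namespace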
Jul 10 11:12:42.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:12:43.082: INFO: namespace: e2e-tests-kubelet-test-qkpp9, resource: bindings, ignored listing per whitelist Jul 10 11:12:43.210: INFO: namespace e2e-tests-kubelet-test-qkpp9 deletion completed in 6.652947644s • [SLOW TEST:6.797 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:12:43.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-gb497 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 10 11:12:43.455: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 10 11:13:19.804: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.59:8080/dial?request=hostName&protocol=http&host=10.244.1.66&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-gb497 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 10 11:13:19.804: INFO: >>> kubeConfig: /root/.kube/config I0710 11:13:19.838787 6 log.go:172] (0xc001dbe2c0) (0xc001f47180) Create stream I0710 11:13:19.838818 6 log.go:172] (0xc001dbe2c0) (0xc001f47180) Stream added, broadcasting: 1 I0710 11:13:19.841874 6 log.go:172] (0xc001dbe2c0) Reply frame received for 1 I0710 11:13:19.841927 6 log.go:172] (0xc001dbe2c0) (0xc001a821e0) Create stream I0710 11:13:19.841943 6 log.go:172] (0xc001dbe2c0) (0xc001a821e0) Stream added, broadcasting: 3 I0710 11:13:19.847298 6 log.go:172] (0xc001dbe2c0) Reply frame received for 3 I0710 11:13:19.847340 6 log.go:172] (0xc001dbe2c0) (0xc001f47220) Create stream I0710 11:13:19.847363 6 log.go:172] (0xc001dbe2c0) (0xc001f47220) Stream added, broadcasting: 5 I0710 11:13:19.848349 6 log.go:172] (0xc001dbe2c0) Reply frame received for 5 I0710 11:13:19.911029 6 log.go:172] (0xc001dbe2c0) Data frame received for 3 I0710 11:13:19.911061 6 log.go:172] (0xc001a821e0) (3) Data frame handling I0710 11:13:19.911081 6 log.go:172] (0xc001a821e0) (3) Data frame sent I0710 11:13:19.911714 6 log.go:172] (0xc001dbe2c0) Data frame received for 3 I0710 11:13:19.911753 6 log.go:172] (0xc001a821e0) (3) Data frame handling I0710 11:13:19.911986 6 log.go:172] (0xc001dbe2c0) Data frame received for 5 I0710 
11:13:19.912029 6 log.go:172] (0xc001f47220) (5) Data frame handling I0710 11:13:19.913868 6 log.go:172] (0xc001dbe2c0) Data frame received for 1 I0710 11:13:19.913889 6 log.go:172] (0xc001f47180) (1) Data frame handling I0710 11:13:19.913903 6 log.go:172] (0xc001f47180) (1) Data frame sent I0710 11:13:19.913917 6 log.go:172] (0xc001dbe2c0) (0xc001f47180) Stream removed, broadcasting: 1 I0710 11:13:19.913993 6 log.go:172] (0xc001dbe2c0) (0xc001f47180) Stream removed, broadcasting: 1 I0710 11:13:19.914006 6 log.go:172] (0xc001dbe2c0) (0xc001a821e0) Stream removed, broadcasting: 3 I0710 11:13:19.914012 6 log.go:172] (0xc001dbe2c0) (0xc001f47220) Stream removed, broadcasting: 5 Jul 10 11:13:19.914: INFO: Waiting for endpoints: map[] I0710 11:13:19.914086 6 log.go:172] (0xc001dbe2c0) Go away received Jul 10 11:13:19.917: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.59:8080/dial?request=hostName&protocol=http&host=10.244.2.57&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-gb497 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 10 11:13:19.917: INFO: >>> kubeConfig: /root/.kube/config I0710 11:13:19.946006 6 log.go:172] (0xc000cd22c0) (0xc001a82640) Create stream I0710 11:13:19.946035 6 log.go:172] (0xc000cd22c0) (0xc001a82640) Stream added, broadcasting: 1 I0710 11:13:19.947821 6 log.go:172] (0xc000cd22c0) Reply frame received for 1 I0710 11:13:19.947852 6 log.go:172] (0xc000cd22c0) (0xc0020319a0) Create stream I0710 11:13:19.947863 6 log.go:172] (0xc000cd22c0) (0xc0020319a0) Stream added, broadcasting: 3 I0710 11:13:19.949222 6 log.go:172] (0xc000cd22c0) Reply frame received for 3 I0710 11:13:19.949258 6 log.go:172] (0xc000cd22c0) (0xc001a826e0) Create stream I0710 11:13:19.949268 6 log.go:172] (0xc000cd22c0) (0xc001a826e0) Stream added, broadcasting: 5 I0710 11:13:19.950049 6 log.go:172] (0xc000cd22c0) Reply frame received for 5 I0710 11:13:20.019027 6 log.go:172] (0xc000cd22c0) Data frame received for 3 I0710 11:13:20.019075 6 log.go:172] (0xc0020319a0) (3) Data frame handling I0710 11:13:20.019103 6 log.go:172] (0xc0020319a0) (3) Data frame sent I0710 11:13:20.019479 6 log.go:172] (0xc000cd22c0) Data frame received for 3 I0710 11:13:20.019508 6 log.go:172] (0xc0020319a0) (3) Data frame handling I0710 11:13:20.019551 6 log.go:172] (0xc000cd22c0) Data frame received for 5 I0710 11:13:20.019585 6 log.go:172] (0xc001a826e0) (5) Data frame handling I0710 11:13:20.021022 6 log.go:172] (0xc000cd22c0) Data frame received for 1 I0710 11:13:20.021043 6 log.go:172] (0xc001a82640) (1) Data frame handling I0710 11:13:20.021055 6 log.go:172] (0xc001a82640) (1) Data frame sent I0710 11:13:20.021077 6 log.go:172] (0xc000cd22c0) (0xc001a82640) Stream removed, broadcasting: 1 I0710 11:13:20.021135 6 log.go:172] (0xc000cd22c0) Go away received I0710 11:13:20.021194 6 log.go:172] (0xc000cd22c0) (0xc001a82640) Stream removed, broadcasting: 1 I0710 11:13:20.021258 6 log.go:172] (0xc000cd22c0) (0xc0020319a0) Stream removed, broadcasting: 3 I0710 11:13:20.021285 6 log.go:172] (0xc000cd22c0) (0xc001a826e0) Stream removed, broadcasting: 5 Jul 10 11:13:20.021: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:13:20.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-gb497" for this suite. 
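The ExecWithOptions calls above amount to curling the /dial endpoint of one test pod from inside the host test pod; run by hand against the same namespace, pod, and IPs shown in the log, the check would look roughly like this:

# ask the helper on 10.244.2.59 to dial the other pod's hostName endpoint over HTTP
kubectl exec host-test-container-pod -c hostexec \
  --namespace=e2e-tests-pod-network-test-gb497 -- \
  /bin/sh -c "curl -g -q -s 'http://10.244.2.59:8080/dial?request=hostName&protocol=http&host=10.244.1.66&port=8080&tries=1'"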
Jul 10 11:13:42.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:13:42.314: INFO: namespace: e2e-tests-pod-network-test-gb497, resource: bindings, ignored listing per whitelist Jul 10 11:13:42.331: INFO: namespace e2e-tests-pod-network-test-gb497 deletion completed in 22.305989408s • [SLOW TEST:59.120 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:13:42.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: starting the proxy server Jul 10 11:13:42.419: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:13:42.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-bblw5" for this suite. 
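The proxy test above only checks that kubectl proxy started with -p 0 binds an ephemeral port and serves /api/; an equivalent manual check (the temp-file path and sleep are arbitrary) might be:

# start the proxy on a random free port and query the API root through it
kubectl proxy -p 0 --disable-filter > /tmp/kubectl-proxy.log &
sleep 2                                   # give the proxy a moment to bind
PORT=$(sed -n 's/.*127\.0\.0\.1:\([0-9]*\).*/\1/p' /tmp/kubectl-proxy.log)
curl -s "http://127.0.0.1:${PORT}/api/"   # should return the served API versions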
Jul 10 11:13:48.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:13:48.603: INFO: namespace: e2e-tests-kubectl-bblw5, resource: bindings, ignored listing per whitelist Jul 10 11:13:48.618: INFO: namespace e2e-tests-kubectl-bblw5 deletion completed in 6.104120826s • [SLOW TEST:6.287 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:13:48.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token STEP: Creating a pod to test consume service account token Jul 10 11:13:49.201: INFO: Waiting up to 5m0s for pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-zx6fr" in namespace "e2e-tests-svcaccounts-7xj8t" to be "success or failure" Jul 10 11:13:49.210: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-zx6fr": Phase="Pending", Reason="", readiness=false. Elapsed: 9.443696ms Jul 10 11:13:51.214: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-zx6fr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013456047s Jul 10 11:13:53.218: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-zx6fr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017409302s Jul 10 11:13:55.307: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-zx6fr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106441511s Jul 10 11:13:57.312: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-zx6fr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.110888308s STEP: Saw pod success Jul 10 11:13:57.312: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-zx6fr" satisfied condition "success or failure" Jul 10 11:13:57.315: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-zx6fr container token-test: STEP: delete the pod Jul 10 11:13:57.343: INFO: Waiting for pod pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-zx6fr to disappear Jul 10 11:13:57.372: INFO: Pod pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-zx6fr no longer exists STEP: Creating a pod to test consume service account root CA Jul 10 11:13:57.375: INFO: Waiting up to 5m0s for pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-2fsvq" in namespace "e2e-tests-svcaccounts-7xj8t" to be "success or failure" Jul 10 11:13:57.378: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-2fsvq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.414363ms Jul 10 11:13:59.382: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-2fsvq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006408174s Jul 10 11:14:01.385: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-2fsvq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010150047s Jul 10 11:14:03.551: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-2fsvq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.176089121s Jul 10 11:14:05.556: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-2fsvq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.180783425s STEP: Saw pod success Jul 10 11:14:05.556: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-2fsvq" satisfied condition "success or failure" Jul 10 11:14:05.559: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-2fsvq container root-ca-test: STEP: delete the pod Jul 10 11:14:05.678: INFO: Waiting for pod pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-2fsvq to disappear Jul 10 11:14:05.695: INFO: Pod pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-2fsvq no longer exists STEP: Creating a pod to test consume service account namespace Jul 10 11:14:05.698: INFO: Waiting up to 5m0s for pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-d8kjh" in namespace "e2e-tests-svcaccounts-7xj8t" to be "success or failure" Jul 10 11:14:05.702: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-d8kjh": Phase="Pending", Reason="", readiness=false. Elapsed: 3.656579ms Jul 10 11:14:07.743: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-d8kjh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045151376s Jul 10 11:14:09.747: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-d8kjh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048491659s Jul 10 11:14:11.778: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-d8kjh": Phase="Running", Reason="", readiness=false. Elapsed: 6.080230418s Jul 10 11:14:13.782: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-d8kjh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.083869987s STEP: Saw pod success Jul 10 11:14:13.782: INFO: Pod "pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-d8kjh" satisfied condition "success or failure" Jul 10 11:14:13.784: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-d8kjh container namespace-test: STEP: delete the pod Jul 10 11:14:13.841: INFO: Waiting for pod pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-d8kjh to disappear Jul 10 11:14:13.858: INFO: Pod pod-service-account-6ddb9d33-c29e-11ea-a406-0242ac11000f-d8kjh no longer exists [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:14:13.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-7xj8t" for this suite. Jul 10 11:14:20.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:14:20.051: INFO: namespace: e2e-tests-svcaccounts-7xj8t, resource: bindings, ignored listing per whitelist Jul 10 11:14:20.062: INFO: namespace e2e-tests-svcaccounts-7xj8t deletion completed in 6.201461718s • [SLOW TEST:31.444 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:14:20.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 10 11:14:20.183: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8052aeff-c29e-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-xjncd" to be "success or failure" Jul 10 11:14:20.187: INFO: Pod "downwardapi-volume-8052aeff-c29e-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.863466ms Jul 10 11:14:22.312: INFO: Pod "downwardapi-volume-8052aeff-c29e-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129157871s Jul 10 11:14:24.390: INFO: Pod "downwardapi-volume-8052aeff-c29e-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.206509302s Jul 10 11:14:26.393: INFO: Pod "downwardapi-volume-8052aeff-c29e-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.209916299s STEP: Saw pod success Jul 10 11:14:26.393: INFO: Pod "downwardapi-volume-8052aeff-c29e-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:14:26.396: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-8052aeff-c29e-11ea-a406-0242ac11000f container client-container: STEP: delete the pod Jul 10 11:14:26.434: INFO: Waiting for pod downwardapi-volume-8052aeff-c29e-11ea-a406-0242ac11000f to disappear Jul 10 11:14:26.438: INFO: Pod downwardapi-volume-8052aeff-c29e-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:14:26.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-xjncd" for this suite. Jul 10 11:14:32.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:14:32.563: INFO: namespace: e2e-tests-downward-api-xjncd, resource: bindings, ignored listing per whitelist Jul 10 11:14:32.569: INFO: namespace e2e-tests-downward-api-xjncd deletion completed in 6.127142066s • [SLOW TEST:12.506 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:14:32.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 10 11:14:32.781: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"87cb460e-c29e-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc0017fac7e), BlockOwnerDeletion:(*bool)(0xc0017fac7f)}} Jul 10 11:14:32.870: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"87c3f838-c29e-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc001f6f92e), BlockOwnerDeletion:(*bool)(0xc001f6f92f)}} Jul 10 11:14:32.901: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"87c46747-c29e-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc001ff635e), BlockOwnerDeletion:(*bool)(0xc001ff635f)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:14:37.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-8f8vs" for this suite. 
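The dependency circle built by the garbage-collector test above is just three pods whose metadata.ownerReferences point at one another (pod1 owned by pod3, pod2 by pod1, pod3 by pod2); a hand-built sketch of one link in that circle, with namespace, pod names, and the pause image as illustrative placeholders, could be:

# pod3 must already exist so its UID can be referenced from pod1
UID3=$(kubectl get pod pod3 -n my-namespace -o jsonpath='{.metadata.uid}')
kubectl apply -n my-namespace -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: ${UID3}
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF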
Jul 10 11:14:43.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:14:44.021: INFO: namespace: e2e-tests-gc-8f8vs, resource: bindings, ignored listing per whitelist Jul 10 11:14:44.032: INFO: namespace e2e-tests-gc-8f8vs deletion completed in 6.085450495s • [SLOW TEST:11.463 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:14:44.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-8ea9bd02-c29e-11ea-a406-0242ac11000f STEP: Creating a pod to test consume configMaps Jul 10 11:14:44.309: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8eac0219-c29e-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-wx66k" to be "success or failure" Jul 10 11:14:44.319: INFO: Pod "pod-projected-configmaps-8eac0219-c29e-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.417733ms Jul 10 11:14:46.325: INFO: Pod "pod-projected-configmaps-8eac0219-c29e-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016422199s Jul 10 11:14:48.328: INFO: Pod "pod-projected-configmaps-8eac0219-c29e-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019566854s Jul 10 11:14:50.332: INFO: Pod "pod-projected-configmaps-8eac0219-c29e-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023520041s STEP: Saw pod success Jul 10 11:14:50.332: INFO: Pod "pod-projected-configmaps-8eac0219-c29e-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:14:50.335: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-8eac0219-c29e-11ea-a406-0242ac11000f container projected-configmap-volume-test: STEP: delete the pod Jul 10 11:14:50.357: INFO: Waiting for pod pod-projected-configmaps-8eac0219-c29e-11ea-a406-0242ac11000f to disappear Jul 10 11:14:50.411: INFO: Pod pod-projected-configmaps-8eac0219-c29e-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:14:50.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-wx66k" for this suite. 
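The projected-configMap case above boils down to mounting a configMap through a projected volume into a non-root container; a minimal stand-alone version (names, namespace, and image are illustrative) is:

kubectl apply -n my-namespace -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  securityContext:
    runAsUser: 1000          # consume the volume as a non-root user
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
EOF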
Jul 10 11:14:56.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:14:56.501: INFO: namespace: e2e-tests-projected-wx66k, resource: bindings, ignored listing per whitelist Jul 10 11:14:56.526: INFO: namespace e2e-tests-projected-wx66k deletion completed in 6.110945891s • [SLOW TEST:12.494 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:14:56.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0710 11:15:07.139584 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 10 11:15:07.139: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:15:07.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-5spvn" for this suite. 
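The garbage-collector test above creates a replication controller and then deletes it without orphaning, expecting the GC to remove the controller's pods as well; the manual equivalent (namespace and image are placeholders) is roughly:

kubectl apply -n my-namespace -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
# default (non-orphaning) deletion: the garbage collector also removes the RC's pods
kubectl delete rc simpletest-rc -n my-namespace
kubectl get pods -l app=gc-demo -n my-namespace   # should eventually report no pods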
Jul 10 11:15:13.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:15:13.228: INFO: namespace: e2e-tests-gc-5spvn, resource: bindings, ignored listing per whitelist Jul 10 11:15:13.270: INFO: namespace e2e-tests-gc-5spvn deletion completed in 6.126793293s • [SLOW TEST:16.744 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:15:13.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-a00eff31-c29e-11ea-a406-0242ac11000f STEP: Creating secret with name s-test-opt-upd-a00effab-c29e-11ea-a406-0242ac11000f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a00eff31-c29e-11ea-a406-0242ac11000f STEP: Updating secret s-test-opt-upd-a00effab-c29e-11ea-a406-0242ac11000f STEP: Creating secret with name s-test-opt-create-a00effc5-c29e-11ea-a406-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:15:25.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-b747c" for this suite. 
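The optional-secret behaviour checked above hinges on a projected volume whose secret source is marked optional, so the secret can be deleted and re-created while the pod keeps running and the mounted path follows the change; a minimal sketch (names, namespace, and image are illustrative):

kubectl apply -n my-namespace -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  containers:
  - name: creates-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/projected-secret-volume; sleep 5; done"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-create
          optional: true     # pod starts even if the secret does not exist yet
EOF
# create or delete the secret and watch the mounted directory follow it
kubectl create secret generic s-test-opt-create --from-literal=data-1=value-1 -n my-namespace
kubectl delete secret s-test-opt-create -n my-namespace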
Jul 10 11:15:47.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:15:47.691: INFO: namespace: e2e-tests-projected-b747c, resource: bindings, ignored listing per whitelist Jul 10 11:15:47.730: INFO: namespace e2e-tests-projected-b747c deletion completed in 22.094712874s • [SLOW TEST:34.460 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:15:47.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating all guestbook components Jul 10 11:15:47.841: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jul 10 11:15:47.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nl7xz' Jul 10 11:15:48.250: INFO: stderr: "" Jul 10 11:15:48.250: INFO: stdout: "service/redis-slave created\n" Jul 10 11:15:48.250: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jul 10 11:15:48.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nl7xz' Jul 10 11:15:48.629: INFO: stderr: "" Jul 10 11:15:48.629: INFO: stdout: "service/redis-master created\n" Jul 10 11:15:48.629: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jul 10 11:15:48.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nl7xz' Jul 10 11:15:48.955: INFO: stderr: "" Jul 10 11:15:48.955: INFO: stdout: "service/frontend created\n" Jul 10 11:15:48.955: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: frontend spec: replicas: 3 template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jul 10 11:15:48.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nl7xz' Jul 10 11:15:49.234: INFO: stderr: "" Jul 10 11:15:49.234: INFO: stdout: "deployment.extensions/frontend created\n" Jul 10 11:15:49.234: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-master spec: replicas: 1 template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jul 10 11:15:49.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nl7xz' Jul 10 11:15:49.543: INFO: stderr: "" Jul 10 11:15:49.544: INFO: stdout: "deployment.extensions/redis-master created\n" Jul 10 11:15:49.544: INFO: apiVersion: extensions/v1beta1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jul 10 11:15:49.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nl7xz' Jul 10 11:15:50.879: INFO: stderr: "" Jul 10 11:15:50.879: INFO: stdout: "deployment.extensions/redis-slave created\n" STEP: validating guestbook app Jul 10 11:15:50.879: INFO: Waiting for all frontend pods to be Running. Jul 10 11:16:00.929: INFO: Waiting for frontend to serve content. Jul 10 11:16:01.374: INFO: Trying to add a new entry to the guestbook. Jul 10 11:16:02.447: INFO: Failed to get response from guestbook. err: , response:
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-master:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-mas...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Strea in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
Jul 10 11:16:07.460: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jul 10 11:16:07.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nl7xz' Jul 10 11:16:08.084: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 10 11:16:08.084: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jul 10 11:16:08.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nl7xz' Jul 10 11:16:08.522: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 10 11:16:08.522: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jul 10 11:16:08.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nl7xz' Jul 10 11:16:08.659: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 10 11:16:08.659: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 10 11:16:08.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nl7xz' Jul 10 11:16:08.769: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 10 11:16:08.769: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n" STEP: using delete to clean up resources Jul 10 11:16:08.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nl7xz' Jul 10 11:16:08.876: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 10 11:16:08.876: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jul 10 11:16:08.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nl7xz' Jul 10 11:16:09.233: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 10 11:16:09.233: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:16:09.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-nl7xz" for this suite. 
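The guestbook validation above (including the transient Predis connection error while redis-master was still coming up) is essentially an HTTP round-trip against the frontend's guestbook.php; a hand-run equivalent via a port-forward, with the local port and namespace as arbitrary placeholders, would be:

# forward a local port to the frontend service created from the manifests above
kubectl port-forward service/frontend 8080:80 -n my-namespace &
sleep 2
# add an entry, then read it back
curl -s 'http://127.0.0.1:8080/guestbook.php?cmd=set&key=messages&value=TestEntry'
curl -s 'http://127.0.0.1:8080/guestbook.php?cmd=get&key=messages'   # expect TestEntry in the reply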
Jul 10 11:16:49.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:16:49.601: INFO: namespace: e2e-tests-kubectl-nl7xz, resource: bindings, ignored listing per whitelist Jul 10 11:16:49.619: INFO: namespace e2e-tests-kubectl-nl7xz deletion completed in 40.286328389s • [SLOW TEST:61.889 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:16:49.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:16:56.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-namespaces-tt4c5" for this suite. Jul 10 11:17:02.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:17:02.643: INFO: namespace: e2e-tests-namespaces-tt4c5, resource: bindings, ignored listing per whitelist Jul 10 11:17:02.660: INFO: namespace e2e-tests-namespaces-tt4c5 deletion completed in 6.10134464s STEP: Destroying namespace "e2e-tests-nsdeletetest-l88wh" for this suite. Jul 10 11:17:02.662: INFO: Namespace e2e-tests-nsdeletetest-l88wh was already deleted STEP: Destroying namespace "e2e-tests-nsdeletetest-qzrsc" for this suite. 
Jul 10 11:17:08.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:17:08.710: INFO: namespace: e2e-tests-nsdeletetest-qzrsc, resource: bindings, ignored listing per whitelist Jul 10 11:17:08.765: INFO: namespace e2e-tests-nsdeletetest-qzrsc deletion completed in 6.103409616s • [SLOW TEST:19.146 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:17:08.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 10 11:17:08.861: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:17:18.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-5j8b2" for this suite. 
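The init-container case above only records "PodSpec: initContainers in spec.initContainers", so for orientation, a minimal sketch of a RestartNever pod with init containers of the kind being invoked (images and commands are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example
spec:
  restartPolicy: Never
  initContainers:              # run to completion, in order, before the app container starts
  - name: init1
    image: busybox
    command: ["/bin/true"]
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]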
Jul 10 11:17:26.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:17:26.142: INFO: namespace: e2e-tests-init-container-5j8b2, resource: bindings, ignored listing per whitelist Jul 10 11:17:26.199: INFO: namespace e2e-tests-init-container-5j8b2 deletion completed in 8.101477127s • [SLOW TEST:17.434 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:17:26.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0710 11:18:06.822041 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 10 11:18:06.822: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:18:06.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-2h96r" for this suite. 
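The garbage-collector case above creates an RC, deletes it with delete options that request orphaning, then waits 30 seconds to confirm the pods are not collected. A sketch of the kind of ReplicationController involved (name, image, and labels are assumptions; the manifest is not shown in the log):

apiVersion: v1
kind: ReplicationController
metadata:
  name: simpletest-rc
spec:
  replicas: 2
  selector:
    app: gc-test
  template:
    metadata:
      labels:
        app: gc-test
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
# Deleting this RC with an orphaning policy (DeleteOptions propagationPolicy=Orphan)
# removes the RC but leaves its pods behind, which is what the test verifies.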
Jul 10 11:18:23.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:18:23.817: INFO: namespace: e2e-tests-gc-2h96r, resource: bindings, ignored listing per whitelist Jul 10 11:18:23.836: INFO: namespace e2e-tests-gc-2h96r deletion completed in 17.010264376s • [SLOW TEST:57.636 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:18:23.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jul 10 11:18:33.209: INFO: Pod name wrapped-volume-race-170cbd8d-c29f-11ea-a406-0242ac11000f: Found 0 pods out of 5 Jul 10 11:18:38.450: INFO: Pod name wrapped-volume-race-170cbd8d-c29f-11ea-a406-0242ac11000f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-170cbd8d-c29f-11ea-a406-0242ac11000f in namespace e2e-tests-emptydir-wrapper-lvmz8, will wait for the garbage collector to delete the pods Jul 10 11:20:18.958: INFO: Deleting ReplicationController wrapped-volume-race-170cbd8d-c29f-11ea-a406-0242ac11000f took: 233.575321ms Jul 10 11:20:19.058: INFO: Terminating ReplicationController wrapped-volume-race-170cbd8d-c29f-11ea-a406-0242ac11000f pods took: 100.185212ms STEP: Creating RC which spawns configmap-volume pods Jul 10 11:20:58.710: INFO: Pod name wrapped-volume-race-6dc75557-c29f-11ea-a406-0242ac11000f: Found 0 pods out of 5 Jul 10 11:21:03.715: INFO: Pod name wrapped-volume-race-6dc75557-c29f-11ea-a406-0242ac11000f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6dc75557-c29f-11ea-a406-0242ac11000f in namespace e2e-tests-emptydir-wrapper-lvmz8, will wait for the garbage collector to delete the pods Jul 10 11:23:35.840: INFO: Deleting ReplicationController wrapped-volume-race-6dc75557-c29f-11ea-a406-0242ac11000f took: 5.804092ms Jul 10 11:23:36.441: INFO: Terminating ReplicationController wrapped-volume-race-6dc75557-c29f-11ea-a406-0242ac11000f pods took: 600.324897ms STEP: Creating RC which spawns configmap-volume pods Jul 10 11:24:17.815: INFO: Pod name wrapped-volume-race-e4802cab-c29f-11ea-a406-0242ac11000f: Found 0 pods out of 5 Jul 10 11:24:22.821: INFO: Pod name wrapped-volume-race-e4802cab-c29f-11ea-a406-0242ac11000f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e4802cab-c29f-11ea-a406-0242ac11000f 
in namespace e2e-tests-emptydir-wrapper-lvmz8, will wait for the garbage collector to delete the pods Jul 10 11:26:57.650: INFO: Deleting ReplicationController wrapped-volume-race-e4802cab-c29f-11ea-a406-0242ac11000f took: 9.008ms Jul 10 11:26:57.750: INFO: Terminating ReplicationController wrapped-volume-race-e4802cab-c29f-11ea-a406-0242ac11000f pods took: 100.252646ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:27:49.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-lvmz8" for this suite. Jul 10 11:27:57.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:27:57.803: INFO: namespace: e2e-tests-emptydir-wrapper-lvmz8, resource: bindings, ignored listing per whitelist Jul 10 11:27:57.806: INFO: namespace e2e-tests-emptydir-wrapper-lvmz8 deletion completed in 8.077989591s • [SLOW TEST:573.970 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:27:57.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 10 11:27:57.912: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jul 10 11:28:02.929: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 10 11:28:02.929: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jul 10 11:28:04.933: INFO: Creating deployment "test-rollover-deployment" Jul 10 11:28:04.961: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jul 10 11:28:06.967: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jul 10 11:28:06.973: INFO: Ensure that both replica sets have 1 created replica Jul 10 11:28:06.978: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jul 10 11:28:06.982: INFO: Updating deployment test-rollover-deployment Jul 10 11:28:06.982: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jul 10 11:28:09.290: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jul 10 11:28:09.438: INFO: Make sure deployment "test-rollover-deployment" is complete Jul 10 11:28:09.443: INFO: all replica sets need to contain the pod-template-hash label Jul 10 11:28:09.444: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977289, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977284, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 10 11:28:11.562: INFO: all replica sets need to contain the pod-template-hash label Jul 10 11:28:11.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977289, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977284, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 10 11:28:13.450: INFO: all replica sets need to contain the pod-template-hash label Jul 10 11:28:13.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977289, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977284, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 10 11:28:15.580: INFO: all replica sets need to contain the pod-template-hash label Jul 10 11:28:15.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977293, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977284, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 10 11:28:17.809: INFO: all replica sets need to contain the pod-template-hash label Jul 10 11:28:17.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977293, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977284, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 10 11:28:19.450: INFO: all replica sets need to contain the pod-template-hash label Jul 10 11:28:19.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977293, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977284, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 10 11:28:21.451: INFO: all replica sets need to contain the pod-template-hash label Jul 10 11:28:21.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977293, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977284, loc:(*time.Location)(0x7950ac0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 10 11:28:23.451: INFO: all replica sets need to contain the pod-template-hash label Jul 10 11:28:23.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977285, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977293, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977284, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 10 11:28:25.899: INFO: Jul 10 11:28:25.899: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 10 11:28:25.907: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-brwq4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-brwq4/deployments/test-rollover-deployment,UID:6beb19cf-c2a0-11ea-b2c9-0242ac120008,ResourceVersion:12805,Generation:2,CreationTimestamp:2020-07-10 11:28:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-10 11:28:05 +0000 UTC 2020-07-10 11:28:05 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-10 11:28:24 +0000 UTC 2020-07-10 11:28:04 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jul 10 11:28:25.931: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-brwq4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-brwq4/replicasets/test-rollover-deployment-5b8479fdb6,UID:6d23d0da-c2a0-11ea-b2c9-0242ac120008,ResourceVersion:12795,Generation:2,CreationTimestamp:2020-07-10 11:28:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6beb19cf-c2a0-11ea-b2c9-0242ac120008 0xc001e99f17 0xc001e99f18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 10 11:28:25.931: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jul 10 11:28:25.931: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-brwq4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-brwq4/replicasets/test-rollover-controller,UID:67b7ec47-c2a0-11ea-b2c9-0242ac120008,ResourceVersion:12804,Generation:2,CreationTimestamp:2020-07-10 11:27:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6beb19cf-c2a0-11ea-b2c9-0242ac120008 0xc001e99d87 0xc001e99d88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 10 11:28:25.932: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-brwq4,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-brwq4/replicasets/test-rollover-deployment-58494b7559,UID:6bf0911f-c2a0-11ea-b2c9-0242ac120008,ResourceVersion:12757,Generation:2,CreationTimestamp:2020-07-10 11:28:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 6beb19cf-c2a0-11ea-b2c9-0242ac120008 0xc001e99e47 0xc001e99e48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 10 11:28:25.934: INFO: Pod "test-rollover-deployment-5b8479fdb6-nwf9b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-nwf9b,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-brwq4,SelfLink:/api/v1/namespaces/e2e-tests-deployment-brwq4/pods/test-rollover-deployment-5b8479fdb6-nwf9b,UID:6e0b2ec8-c2a0-11ea-b2c9-0242ac120008,ResourceVersion:12774,Generation:0,CreationTimestamp:2020-07-10 11:28:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 6d23d0da-c2a0-11ea-b2c9-0242ac120008 0xc001671c37 
0xc001671c38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8cwxn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8cwxn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-8cwxn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001671d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc001671d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:28:08 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:28:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:28:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 11:28:08 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.83,StartTime:2020-07-10 11:28:08 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-10 11:28:13 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://d84b036b2a27480d119dfa31da222afe32655c432de42049b8319abca7c581c7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:28:25.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-brwq4" for this suite. 
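Most of the rollover test's parameters are visible in the dumps above (minReadySeconds 10, maxUnavailable 0, maxSurge 1, selector name=rollover-pod, new image gcr.io/kubernetes-e2e-test-images/redis:1.0). Reassembled into a minimal, illustrative Deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10          # from MinReadySeconds:10 in the dump
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # from MaxUnavailable:0
      maxSurge: 1              # from MaxSurge:1
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0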
Jul 10 11:28:36.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:28:36.100: INFO: namespace: e2e-tests-deployment-brwq4, resource: bindings, ignored listing per whitelist Jul 10 11:28:36.112: INFO: namespace e2e-tests-deployment-brwq4 deletion completed in 10.175822138s • [SLOW TEST:38.307 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:28:36.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:28:42.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-7x6gh" for this suite. 
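The hostAliases case above does not print the pod spec, so as a reference, a minimal pod of the kind the kubelet test exercises, with assumed IPs and hostnames:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases
spec:
  restartPolicy: Never
  hostAliases:                 # rendered by the kubelet into the pod's /etc/hosts
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "cat /etc/hosts"]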
Jul 10 11:29:22.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:29:22.375: INFO: namespace: e2e-tests-kubelet-test-7x6gh, resource: bindings, ignored listing per whitelist Jul 10 11:29:22.390: INFO: namespace e2e-tests-kubelet-test-7x6gh deletion completed in 40.118228959s • [SLOW TEST:46.277 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:29:22.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 10 11:29:22.472: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a1f7b26-c2a0-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-949zq" to be "success or failure" Jul 10 11:29:22.475: INFO: Pod "downwardapi-volume-9a1f7b26-c2a0-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.273862ms Jul 10 11:29:24.602: INFO: Pod "downwardapi-volume-9a1f7b26-c2a0-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129934378s Jul 10 11:29:26.605: INFO: Pod "downwardapi-volume-9a1f7b26-c2a0-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132975815s Jul 10 11:29:28.609: INFO: Pod "downwardapi-volume-9a1f7b26-c2a0-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.137186501s STEP: Saw pod success Jul 10 11:29:28.609: INFO: Pod "downwardapi-volume-9a1f7b26-c2a0-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:29:28.612: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-9a1f7b26-c2a0-11ea-a406-0242ac11000f container client-container: STEP: delete the pod Jul 10 11:29:28.714: INFO: Waiting for pod downwardapi-volume-9a1f7b26-c2a0-11ea-a406-0242ac11000f to disappear Jul 10 11:29:28.717: INFO: Pod downwardapi-volume-9a1f7b26-c2a0-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:29:28.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-949zq" for this suite. 
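For the projected downwardAPI "set mode on item file" case, the log only shows pod phases; an illustrative pod with a per-item file mode (paths, image, and the 0400 mode are assumptions consistent with the test name):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400         # the per-item file mode the test asserts on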
Jul 10 11:29:34.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:29:34.747: INFO: namespace: e2e-tests-projected-949zq, resource: bindings, ignored listing per whitelist Jul 10 11:29:34.851: INFO: namespace e2e-tests-projected-949zq deletion completed in 6.130665568s • [SLOW TEST:12.461 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:29:34.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 10 11:29:34.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-lm6gl' Jul 10 11:29:37.404: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 10 11:29:37.404: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jul 10 11:29:37.420: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jul 10 11:29:37.462: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jul 10 11:29:37.508: INFO: scanned /root for discovery docs: Jul 10 11:29:37.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-lm6gl' Jul 10 11:29:54.243: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jul 10 11:29:54.243: INFO: stdout: "Created e2e-test-nginx-rc-b6eccbcf01289b22a624c9791f1d717a\nScaling up e2e-test-nginx-rc-b6eccbcf01289b22a624c9791f1d717a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b6eccbcf01289b22a624c9791f1d717a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b6eccbcf01289b22a624c9791f1d717a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Jul 10 11:29:54.243: INFO: stdout: "Created e2e-test-nginx-rc-b6eccbcf01289b22a624c9791f1d717a\nScaling up e2e-test-nginx-rc-b6eccbcf01289b22a624c9791f1d717a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-b6eccbcf01289b22a624c9791f1d717a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-b6eccbcf01289b22a624c9791f1d717a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jul 10 11:29:54.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lm6gl' Jul 10 11:29:54.340: INFO: stderr: "" Jul 10 11:29:54.340: INFO: stdout: "e2e-test-nginx-rc-b6eccbcf01289b22a624c9791f1d717a-8jn62 e2e-test-nginx-rc-q6fsn " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Jul 10 11:29:59.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lm6gl' Jul 10 11:29:59.441: INFO: stderr: "" Jul 10 11:29:59.441: INFO: stdout: "e2e-test-nginx-rc-b6eccbcf01289b22a624c9791f1d717a-8jn62 " Jul 10 11:29:59.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b6eccbcf01289b22a624c9791f1d717a-8jn62 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lm6gl' Jul 10 11:29:59.532: INFO: stderr: "" Jul 10 11:29:59.532: INFO: stdout: "true" Jul 10 11:29:59.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-b6eccbcf01289b22a624c9791f1d717a-8jn62 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lm6gl' Jul 10 11:29:59.618: INFO: stderr: "" Jul 10 11:29:59.618: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jul 10 11:29:59.618: INFO: e2e-test-nginx-rc-b6eccbcf01289b22a624c9791f1d717a-8jn62 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Jul 10 11:29:59.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-lm6gl' Jul 10 11:29:59.717: INFO: stderr: "" Jul 10 11:29:59.717: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:29:59.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-lm6gl" for this suite. 
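The rolling-update case above first creates an RC via "kubectl run --generator=run/v1" and then rolls it over to the same image. The starting RC is roughly equivalent to the following (only the name, run label, container name, and image appear in the log; the rest is an assumed minimal shape):

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine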
Jul 10 11:30:09.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:30:09.745: INFO: namespace: e2e-tests-kubectl-lm6gl, resource: bindings, ignored listing per whitelist Jul 10 11:30:09.804: INFO: namespace e2e-tests-kubectl-lm6gl deletion completed in 10.082415798s • [SLOW TEST:34.953 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:30:09.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 10 11:30:10.090: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b6836488-c2a0-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-wcvll" to be "success or failure" Jul 10 11:30:10.108: INFO: Pod "downwardapi-volume-b6836488-c2a0-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.093185ms Jul 10 11:30:12.112: INFO: Pod "downwardapi-volume-b6836488-c2a0-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022766233s Jul 10 11:30:14.117: INFO: Pod "downwardapi-volume-b6836488-c2a0-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 4.026975731s Jul 10 11:30:16.121: INFO: Pod "downwardapi-volume-b6836488-c2a0-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030875943s STEP: Saw pod success Jul 10 11:30:16.121: INFO: Pod "downwardapi-volume-b6836488-c2a0-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:30:16.123: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-b6836488-c2a0-11ea-a406-0242ac11000f container client-container: STEP: delete the pod Jul 10 11:30:16.176: INFO: Waiting for pod downwardapi-volume-b6836488-c2a0-11ea-a406-0242ac11000f to disappear Jul 10 11:30:16.201: INFO: Pod downwardapi-volume-b6836488-c2a0-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:30:16.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-wcvll" for this suite. 
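The pod for this downward API case is not dumped in the log; an illustrative sketch (image, command, and paths assumed) of a container with no memory limit whose downwardAPI volume therefore reports the node's allocatable memory for limits.memory:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-memlimit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory is set, so limits.memory below falls back
    # to the node's allocatable memory, which is what the test checks.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory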
Jul 10 11:30:24.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:30:24.234: INFO: namespace: e2e-tests-downward-api-wcvll, resource: bindings, ignored listing per whitelist Jul 10 11:30:24.289: INFO: namespace e2e-tests-downward-api-wcvll deletion completed in 8.083243512s • [SLOW TEST:14.484 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:30:24.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test use defaults Jul 10 11:30:24.696: INFO: Waiting up to 5m0s for pod "client-containers-bf383cbb-c2a0-11ea-a406-0242ac11000f" in namespace "e2e-tests-containers-5p222" to be "success or failure" Jul 10 11:30:24.782: INFO: Pod "client-containers-bf383cbb-c2a0-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 86.049435ms Jul 10 11:30:26.787: INFO: Pod "client-containers-bf383cbb-c2a0-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090429032s Jul 10 11:30:28.791: INFO: Pod "client-containers-bf383cbb-c2a0-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094837564s Jul 10 11:30:31.325: INFO: Pod "client-containers-bf383cbb-c2a0-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.629064271s Jul 10 11:30:33.396: INFO: Pod "client-containers-bf383cbb-c2a0-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.699883657s Jul 10 11:30:35.400: INFO: Pod "client-containers-bf383cbb-c2a0-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.703704058s STEP: Saw pod success Jul 10 11:30:35.400: INFO: Pod "client-containers-bf383cbb-c2a0-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:30:35.403: INFO: Trying to get logs from node hunter-worker2 pod client-containers-bf383cbb-c2a0-11ea-a406-0242ac11000f container test-container: STEP: delete the pod Jul 10 11:30:35.614: INFO: Waiting for pod client-containers-bf383cbb-c2a0-11ea-a406-0242ac11000f to disappear Jul 10 11:30:35.638: INFO: Pod client-containers-bf383cbb-c2a0-11ea-a406-0242ac11000f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:30:35.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-5p222" for this suite. Jul 10 11:30:41.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:30:41.917: INFO: namespace: e2e-tests-containers-5p222, resource: bindings, ignored listing per whitelist Jul 10 11:30:41.987: INFO: namespace e2e-tests-containers-5p222 deletion completed in 6.336680511s • [SLOW TEST:17.698 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:30:41.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name cm-test-opt-del-c997d6fc-c2a0-11ea-a406-0242ac11000f STEP: Creating configMap with name cm-test-opt-upd-c997d73f-c2a0-11ea-a406-0242ac11000f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c997d6fc-c2a0-11ea-a406-0242ac11000f STEP: Updating configmap cm-test-opt-upd-c997d73f-c2a0-11ea-a406-0242ac11000f STEP: Creating configMap with name cm-test-opt-create-c997d75a-c2a0-11ea-a406-0242ac11000f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:30:52.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mn9n2" for this suite. 
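The optional-configMap case above creates, deletes, and updates configmaps named cm-test-opt-del-*, cm-test-opt-upd-*, and cm-test-opt-create-*. A sketch of the pod side, with the configmaps mounted as optional projected sources (names shortened, image and paths assumed):

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmaps-optional
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del      # deleted mid-test; optional, so the pod keeps running
          optional: true
      - configMap:
          name: cm-test-opt-upd      # updated mid-test; the change shows up in the volume
          optional: true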
Jul 10 11:31:18.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:31:18.284: INFO: namespace: e2e-tests-projected-mn9n2, resource: bindings, ignored listing per whitelist Jul 10 11:31:18.345: INFO: namespace e2e-tests-projected-mn9n2 deletion completed in 26.112394245s • [SLOW TEST:36.356 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:31:18.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 10 11:31:24.526: INFO: Waiting up to 5m0s for pod "client-envvars-e2e0de7f-c2a0-11ea-a406-0242ac11000f" in namespace "e2e-tests-pods-tnqbx" to be "success or failure" Jul 10 11:31:24.579: INFO: Pod "client-envvars-e2e0de7f-c2a0-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 53.452957ms Jul 10 11:31:29.277: INFO: Pod "client-envvars-e2e0de7f-c2a0-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.751493612s Jul 10 11:31:32.210: INFO: Pod "client-envvars-e2e0de7f-c2a0-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 7.684440994s Jul 10 11:31:34.214: INFO: Pod "client-envvars-e2e0de7f-c2a0-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.688086879s STEP: Saw pod success Jul 10 11:31:34.214: INFO: Pod "client-envvars-e2e0de7f-c2a0-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:31:34.216: INFO: Trying to get logs from node hunter-worker pod client-envvars-e2e0de7f-c2a0-11ea-a406-0242ac11000f container env3cont: STEP: delete the pod Jul 10 11:31:34.275: INFO: Waiting for pod client-envvars-e2e0de7f-c2a0-11ea-a406-0242ac11000f to disappear Jul 10 11:31:34.305: INFO: Pod client-envvars-e2e0de7f-c2a0-11ea-a406-0242ac11000f no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:31:34.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-tnqbx" for this suite. 
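
Editor's note, not part of the captured log: the Pods test above relies on the kubelet injecting {SERVICE}_SERVICE_HOST and {SERVICE}_SERVICE_PORT environment variables into containers created after a Service exists. A small Go sketch of what the client container effectively checks; the variable filtering is illustrative, not the test's actual code.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Inside a container started after a Service (e.g. "fooservice") exists,
	// the kubelet injects FOOSERVICE_SERVICE_HOST, FOOSERVICE_SERVICE_PORT and
	// related variables. This dumps everything that looks like one of them.
	for _, kv := range os.Environ() {
		if strings.Contains(kv, "_SERVICE_HOST=") || strings.Contains(kv, "_SERVICE_PORT=") {
			fmt.Println(kv)
		}
	}
}
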
Jul 10 11:32:20.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:32:20.332: INFO: namespace: e2e-tests-pods-tnqbx, resource: bindings, ignored listing per whitelist Jul 10 11:32:20.421: INFO: namespace e2e-tests-pods-tnqbx deletion completed in 46.113792221s • [SLOW TEST:62.076 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:32:20.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 10 11:32:30.613: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 10 11:32:31.170: INFO: Pod pod-with-prestop-http-hook still exists Jul 10 11:32:33.170: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 10 11:32:34.392: INFO: Pod pod-with-prestop-http-hook still exists Jul 10 11:32:35.170: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 10 11:32:35.223: INFO: Pod pod-with-prestop-http-hook still exists Jul 10 11:32:37.170: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 10 11:32:37.173: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:32:37.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-wjmlg" for this suite. 
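
Editor's note, not part of the captured log: the lifecycle-hook entries above delete a pod whose container declares a preStop HTTP GET handler. A minimal sketch of such a container spec follows; the handler host, path, and port are illustrative, and the handler type is corev1.Handler as in the v1.13-era client API (later versions rename it LifecycleHandler).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// When this pod is deleted, the kubelet first issues the preStop HTTP GET
	// against the handler address, then proceeds with container termination.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "k8s.gcr.io/pause:3.1",
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: "10.244.2.1", // address of a hook-handler pod; illustrative
							Path: "/echo?msg=prestop",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Lifecycle.PreStop.HTTPGet.Path)
}
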
Jul 10 11:33:01.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:33:01.221: INFO: namespace: e2e-tests-container-lifecycle-hook-wjmlg, resource: bindings, ignored listing per whitelist Jul 10 11:33:01.269: INFO: namespace e2e-tests-container-lifecycle-hook-wjmlg deletion completed in 24.085791654s • [SLOW TEST:40.847 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:33:01.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 10 11:33:02.020: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:33:12.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-rtv5p" for this suite. 
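
Editor's note, not part of the captured log: the InitContainer test above creates a RestartAlways pod whose init containers must all complete before the app container starts. A minimal sketch of that shape of pod spec; images and commands are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Init containers run to completion, in declaration order, before any app
	// container is started.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	fmt.Printf("init containers: %d\n", len(pod.Spec.InitContainers))
}
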
Jul 10 11:33:38.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:33:38.298: INFO: namespace: e2e-tests-init-container-rtv5p, resource: bindings, ignored listing per whitelist Jul 10 11:33:38.319: INFO: namespace e2e-tests-init-container-rtv5p deletion completed in 26.142080723s • [SLOW TEST:37.050 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:33:38.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 10 11:34:02.948: INFO: Container started at 2020-07-10 11:33:44 +0000 UTC, pod became ready at 2020-07-10 11:34:01 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:34:02.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-bghsl" for this suite. 
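
Editor's note, not part of the captured log: the readiness-probe test above asserts that a pod is not marked Ready before the probe's initial delay and that it never restarts. A minimal sketch of a container with such a probe; the delay, period, and probe command are illustrative, and the embedded handler field is named Handler in the v1.13-era client API (ProbeHandler in newer versions).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The pod only turns Ready once the readiness probe succeeds, and the first
	// probe attempt does not happen until InitialDelaySeconds has elapsed.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "probed",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "touch /tmp/ready && sleep 3600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
					},
					InitialDelaySeconds: 10, // illustrative value
					PeriodSeconds:       5,
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].ReadinessProbe.InitialDelaySeconds)
}
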
Jul 10 11:34:27.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:34:27.518: INFO: namespace: e2e-tests-container-probe-bghsl, resource: bindings, ignored listing per whitelist Jul 10 11:34:27.557: INFO: namespace e2e-tests-container-probe-bghsl deletion completed in 24.60483443s • [SLOW TEST:49.238 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:34:27.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 10 11:34:28.082: INFO: PodSpec: initContainers in spec.initContainers Jul 10 11:35:30.787: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-504ad112-c2a1-11ea-a406-0242ac11000f", GenerateName:"", Namespace:"e2e-tests-init-container-6bc2l", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-6bc2l/pods/pod-init-504ad112-c2a1-11ea-a406-0242ac11000f", UID:"504b3e5f-c2a1-11ea-b2c9-0242ac120008", ResourceVersion:"14036", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729977668, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"82516410"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6hqd8", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002248ac0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6hqd8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6hqd8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6hqd8", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0020d4c48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d9a0c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020d4cd0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020d4dd0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0020d4dd8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0020d4ddc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977669, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977669, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977669, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729977668, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.2.96", StartTime:(*v1.Time)(0xc00206a200), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003f0c40)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003f0cb0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://06f37ab3b749b0c0db46d5ea33ae8f754ad33158879d00d85fa671d38cfb6771"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00206a240), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00206a220), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:35:30.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-6bc2l" for this suite. Jul 10 11:35:56.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:35:56.933: INFO: namespace: e2e-tests-init-container-6bc2l, resource: bindings, ignored listing per whitelist Jul 10 11:35:56.991: INFO: namespace e2e-tests-init-container-6bc2l deletion completed in 26.089017022s • [SLOW TEST:89.434 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:35:56.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-855a5fe5-c2a1-11ea-a406-0242ac11000f STEP: Creating a pod to test consume secrets Jul 10 11:35:57.183: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-85659115-c2a1-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-mmxc6" to be "success or failure" Jul 10 11:35:57.231: INFO: Pod "pod-projected-secrets-85659115-c2a1-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 47.281975ms Jul 10 11:35:59.357: INFO: Pod "pod-projected-secrets-85659115-c2a1-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173155039s Jul 10 11:36:01.399: INFO: Pod "pod-projected-secrets-85659115-c2a1-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.215516601s STEP: Saw pod success Jul 10 11:36:01.399: INFO: Pod "pod-projected-secrets-85659115-c2a1-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:36:01.401: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-85659115-c2a1-11ea-a406-0242ac11000f container projected-secret-volume-test: STEP: delete the pod Jul 10 11:36:01.483: INFO: Waiting for pod pod-projected-secrets-85659115-c2a1-11ea-a406-0242ac11000f to disappear Jul 10 11:36:01.506: INFO: Pod pod-projected-secrets-85659115-c2a1-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:36:01.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-mmxc6" for this suite. Jul 10 11:36:07.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:36:07.674: INFO: namespace: e2e-tests-projected-mmxc6, resource: bindings, ignored listing per whitelist Jul 10 11:36:07.676: INFO: namespace e2e-tests-projected-mmxc6 deletion completed in 6.166584615s • [SLOW TEST:10.685 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:36:07.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a replication controller Jul 10 11:36:07.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xpszf' Jul 10 11:36:08.036: INFO: stderr: "" Jul 10 11:36:08.036: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 10 11:36:08.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xpszf' Jul 10 11:36:08.174: INFO: stderr: "" Jul 10 11:36:08.175: INFO: stdout: "update-demo-nautilus-dlsgw update-demo-nautilus-p8cct " Jul 10 11:36:08.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlsgw -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xpszf' Jul 10 11:36:08.324: INFO: stderr: "" Jul 10 11:36:08.324: INFO: stdout: "" Jul 10 11:36:08.324: INFO: update-demo-nautilus-dlsgw is created but not running Jul 10 11:36:13.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xpszf' Jul 10 11:36:13.422: INFO: stderr: "" Jul 10 11:36:13.422: INFO: stdout: "update-demo-nautilus-dlsgw update-demo-nautilus-p8cct " Jul 10 11:36:13.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlsgw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xpszf' Jul 10 11:36:13.521: INFO: stderr: "" Jul 10 11:36:13.521: INFO: stdout: "true" Jul 10 11:36:13.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlsgw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xpszf' Jul 10 11:36:13.620: INFO: stderr: "" Jul 10 11:36:13.620: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 10 11:36:13.620: INFO: validating pod update-demo-nautilus-dlsgw Jul 10 11:36:13.624: INFO: got data: { "image": "nautilus.jpg" } Jul 10 11:36:13.624: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 10 11:36:13.624: INFO: update-demo-nautilus-dlsgw is verified up and running Jul 10 11:36:13.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p8cct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xpszf' Jul 10 11:36:13.715: INFO: stderr: "" Jul 10 11:36:13.715: INFO: stdout: "true" Jul 10 11:36:13.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p8cct -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xpszf' Jul 10 11:36:13.802: INFO: stderr: "" Jul 10 11:36:13.802: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 10 11:36:13.802: INFO: validating pod update-demo-nautilus-p8cct Jul 10 11:36:13.805: INFO: got data: { "image": "nautilus.jpg" } Jul 10 11:36:13.805: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 10 11:36:13.805: INFO: update-demo-nautilus-p8cct is verified up and running STEP: using delete to clean up resources Jul 10 11:36:13.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xpszf' Jul 10 11:36:13.907: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jul 10 11:36:13.907: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 10 11:36:13.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-xpszf' Jul 10 11:36:14.010: INFO: stderr: "No resources found.\n" Jul 10 11:36:14.010: INFO: stdout: "" Jul 10 11:36:14.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-xpszf -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 10 11:36:14.108: INFO: stderr: "" Jul 10 11:36:14.108: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:36:14.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-xpszf" for this suite. Jul 10 11:36:36.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:36:36.422: INFO: namespace: e2e-tests-kubectl-xpszf, resource: bindings, ignored listing per whitelist Jul 10 11:36:36.491: INFO: namespace e2e-tests-kubectl-xpszf deletion completed in 22.379675341s • [SLOW TEST:28.815 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:36:36.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 10 11:36:44.643: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 10 11:36:44.645: INFO: Pod pod-with-poststart-exec-hook still exists Jul 10 11:36:46.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 10 11:36:46.649: INFO: Pod pod-with-poststart-exec-hook still exists Jul 10 11:36:48.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 10 11:36:48.648: INFO: Pod pod-with-poststart-exec-hook still exists Jul 10 11:36:50.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 10 11:36:50.662: INFO: Pod pod-with-poststart-exec-hook still exists Jul 10 11:36:52.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 10 11:36:52.649: INFO: Pod pod-with-poststart-exec-hook still exists Jul 10 11:36:54.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 10 11:36:54.649: INFO: Pod pod-with-poststart-exec-hook still exists Jul 10 11:36:56.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 10 11:36:56.649: INFO: Pod pod-with-poststart-exec-hook still exists Jul 10 11:36:58.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 10 11:36:59.424: INFO: Pod pod-with-poststart-exec-hook still exists Jul 10 11:37:00.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 10 11:37:00.698: INFO: Pod pod-with-poststart-exec-hook still exists Jul 10 11:37:02.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 10 11:37:02.649: INFO: Pod pod-with-poststart-exec-hook still exists Jul 10 11:37:04.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 10 11:37:04.649: INFO: Pod pod-with-poststart-exec-hook still exists Jul 10 11:37:06.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 10 11:37:06.649: INFO: Pod pod-with-poststart-exec-hook still exists Jul 10 11:37:08.645: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jul 10 11:37:08.649: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:37:08.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-z9fdk" for this suite. 
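
Editor's note, not part of the captured log: the entries above wait for a pod with a postStart exec hook to be executed and then deleted. A minimal sketch of such a container spec; the hook command and handler address are illustrative, and the handler type is corev1.Handler as in the v1.13-era client API.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The postStart exec handler runs inside the container immediately after it
	// is created; here it simply pokes a hook-handler endpoint.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "wget -qO- http://10.244.2.1:8080/echo?msg=poststart"},
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Lifecycle.PostStart.Exec.Command)
}
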
Jul 10 11:37:30.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:37:30.834: INFO: namespace: e2e-tests-container-lifecycle-hook-z9fdk, resource: bindings, ignored listing per whitelist Jul 10 11:37:30.843: INFO: namespace e2e-tests-container-lifecycle-hook-z9fdk deletion completed in 22.191431369s • [SLOW TEST:54.352 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:37:30.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod pod-subpath-test-configmap-5wsb STEP: Creating a pod to test atomic-volume-subpath Jul 10 11:37:30.989: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5wsb" in namespace "e2e-tests-subpath-7tjlf" to be "success or failure" Jul 10 11:37:30.993: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.831127ms Jul 10 11:37:32.997: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007729894s Jul 10 11:37:35.001: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011769558s Jul 10 11:37:37.038: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Running", Reason="", readiness=false. Elapsed: 6.048834745s Jul 10 11:37:39.042: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Running", Reason="", readiness=false. Elapsed: 8.052903367s Jul 10 11:37:41.046: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Running", Reason="", readiness=false. Elapsed: 10.05724935s Jul 10 11:37:43.050: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Running", Reason="", readiness=false. Elapsed: 12.060788619s Jul 10 11:37:45.054: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Running", Reason="", readiness=false. Elapsed: 14.064353529s Jul 10 11:37:47.058: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Running", Reason="", readiness=false. Elapsed: 16.068491519s Jul 10 11:37:49.062: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Running", Reason="", readiness=false. Elapsed: 18.072694605s Jul 10 11:37:51.066: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.077130196s Jul 10 11:37:53.070: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Running", Reason="", readiness=false. Elapsed: 22.08115624s Jul 10 11:37:55.075: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Running", Reason="", readiness=false. Elapsed: 24.085766052s Jul 10 11:37:57.079: INFO: Pod "pod-subpath-test-configmap-5wsb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.089703219s STEP: Saw pod success Jul 10 11:37:57.079: INFO: Pod "pod-subpath-test-configmap-5wsb" satisfied condition "success or failure" Jul 10 11:37:57.082: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-5wsb container test-container-subpath-configmap-5wsb: STEP: delete the pod Jul 10 11:37:57.191: INFO: Waiting for pod pod-subpath-test-configmap-5wsb to disappear Jul 10 11:37:57.209: INFO: Pod pod-subpath-test-configmap-5wsb no longer exists STEP: Deleting pod pod-subpath-test-configmap-5wsb Jul 10 11:37:57.209: INFO: Deleting pod "pod-subpath-test-configmap-5wsb" in namespace "e2e-tests-subpath-7tjlf" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:37:57.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-subpath-7tjlf" for this suite. Jul 10 11:38:03.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:38:03.270: INFO: namespace: e2e-tests-subpath-7tjlf, resource: bindings, ignored listing per whitelist Jul 10 11:38:03.305: INFO: namespace e2e-tests-subpath-7tjlf deletion completed in 6.090893924s • [SLOW TEST:32.462 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:38:03.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-d09e8ba5-c2a1-11ea-a406-0242ac11000f STEP: Creating a pod to test consume secrets Jul 10 11:38:03.432: INFO: Waiting up to 5m0s for pod "pod-secrets-d0a4736e-c2a1-11ea-a406-0242ac11000f" in namespace "e2e-tests-secrets-vc66r" to be "success or failure" Jul 10 11:38:03.452: INFO: Pod "pod-secrets-d0a4736e-c2a1-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.642598ms Jul 10 11:38:05.455: INFO: Pod "pod-secrets-d0a4736e-c2a1-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022819883s Jul 10 11:38:07.543: INFO: Pod "pod-secrets-d0a4736e-c2a1-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11136596s STEP: Saw pod success Jul 10 11:38:07.543: INFO: Pod "pod-secrets-d0a4736e-c2a1-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:38:07.546: INFO: Trying to get logs from node hunter-worker pod pod-secrets-d0a4736e-c2a1-11ea-a406-0242ac11000f container secret-env-test: STEP: delete the pod Jul 10 11:38:07.563: INFO: Waiting for pod pod-secrets-d0a4736e-c2a1-11ea-a406-0242ac11000f to disappear Jul 10 11:38:07.568: INFO: Pod pod-secrets-d0a4736e-c2a1-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:38:07.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-vc66r" for this suite. Jul 10 11:38:15.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:38:15.633: INFO: namespace: e2e-tests-secrets-vc66r, resource: bindings, ignored listing per whitelist Jul 10 11:38:15.651: INFO: namespace e2e-tests-secrets-vc66r deletion completed in 8.080866145s • [SLOW TEST:12.346 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:38:15.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jul 10 11:38:15.891: INFO: Waiting up to 5m0s for pod "downward-api-d80f82be-c2a1-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-cd4kc" to be "success or failure" Jul 10 11:38:15.906: INFO: Pod "downward-api-d80f82be-c2a1-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.79268ms Jul 10 11:38:17.910: INFO: Pod "downward-api-d80f82be-c2a1-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01984194s Jul 10 11:38:20.183: INFO: Pod "downward-api-d80f82be-c2a1-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292277273s Jul 10 11:38:22.418: INFO: Pod "downward-api-d80f82be-c2a1-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.527759073s Jul 10 11:38:24.422: INFO: Pod "downward-api-d80f82be-c2a1-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.531269041s STEP: Saw pod success Jul 10 11:38:24.422: INFO: Pod "downward-api-d80f82be-c2a1-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:38:24.424: INFO: Trying to get logs from node hunter-worker2 pod downward-api-d80f82be-c2a1-11ea-a406-0242ac11000f container dapi-container: STEP: delete the pod Jul 10 11:38:24.447: INFO: Waiting for pod downward-api-d80f82be-c2a1-11ea-a406-0242ac11000f to disappear Jul 10 11:38:24.490: INFO: Pod downward-api-d80f82be-c2a1-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:38:24.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-cd4kc" for this suite. Jul 10 11:38:30.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:38:30.534: INFO: namespace: e2e-tests-downward-api-cd4kc, resource: bindings, ignored listing per whitelist Jul 10 11:38:30.563: INFO: namespace e2e-tests-downward-api-cd4kc deletion completed in 6.069715478s • [SLOW TEST:14.911 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:38:30.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 10 11:38:30.654: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0df6a48-c2a1-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-vw69b" to be "success or failure" Jul 10 11:38:30.674: INFO: Pod "downwardapi-volume-e0df6a48-c2a1-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.313399ms Jul 10 11:38:34.256: INFO: Pod "downwardapi-volume-e0df6a48-c2a1-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.602579834s Jul 10 11:38:36.261: INFO: Pod "downwardapi-volume-e0df6a48-c2a1-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.60704517s Jul 10 11:38:38.439: INFO: Pod "downwardapi-volume-e0df6a48-c2a1-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 7.78531862s Jul 10 11:38:40.443: INFO: Pod "downwardapi-volume-e0df6a48-c2a1-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.78924874s STEP: Saw pod success Jul 10 11:38:40.443: INFO: Pod "downwardapi-volume-e0df6a48-c2a1-11ea-a406-0242ac11000f" satisfied condition "success or failure" Jul 10 11:38:40.446: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-e0df6a48-c2a1-11ea-a406-0242ac11000f container client-container: STEP: delete the pod Jul 10 11:38:40.546: INFO: Waiting for pod downwardapi-volume-e0df6a48-c2a1-11ea-a406-0242ac11000f to disappear Jul 10 11:38:40.551: INFO: Pod downwardapi-volume-e0df6a48-c2a1-11ea-a406-0242ac11000f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 10 11:38:40.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-vw69b" for this suite. Jul 10 11:38:46.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 10 11:38:46.720: INFO: namespace: e2e-tests-projected-vw69b, resource: bindings, ignored listing per whitelist Jul 10 11:38:46.767: INFO: namespace e2e-tests-projected-vw69b deletion completed in 6.214325038s • [SLOW TEST:16.204 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 10 11:38:46.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 10 11:38:47.247: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
alternatives.log
containers/
[the same two-entry listing is returned for each subsequent proxy request; the remainder of the Proxy test output and the header of the following Projected downwardAPI test are not present in this capture]
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 10 11:38:56.570: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0494c46-c2a1-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-pdw6j" to be "success or failure"
Jul 10 11:38:56.596: INFO: Pod "downwardapi-volume-f0494c46-c2a1-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.863622ms
Jul 10 11:38:58.644: INFO: Pod "downwardapi-volume-f0494c46-c2a1-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074065119s
Jul 10 11:39:00.648: INFO: Pod "downwardapi-volume-f0494c46-c2a1-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077689295s
STEP: Saw pod success
Jul 10 11:39:00.648: INFO: Pod "downwardapi-volume-f0494c46-c2a1-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:39:00.650: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f0494c46-c2a1-11ea-a406-0242ac11000f container client-container: 
STEP: delete the pod
Jul 10 11:39:00.683: INFO: Waiting for pod downwardapi-volume-f0494c46-c2a1-11ea-a406-0242ac11000f to disappear
Jul 10 11:39:00.697: INFO: Pod downwardapi-volume-f0494c46-c2a1-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:39:00.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pdw6j" for this suite.
Jul 10 11:39:06.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:39:06.744: INFO: namespace: e2e-tests-projected-pdw6j, resource: bindings, ignored listing per whitelist
Jul 10 11:39:06.855: INFO: namespace e2e-tests-projected-pdw6j deletion completed in 6.154773369s

• [SLOW TEST:11.294 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
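
Editor's note, not part of the captured log: the test that just completed exposes the container's memory limit through a projected downward API volume and reads it back from a file. A minimal Go sketch of that pod shape; the file path, image, and 64Mi limit are illustrative.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The projected downward API volume writes limits.memory for the named
	// container into a file; the test container cats the file and exits.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println(pod.Spec.Volumes[0].Name)
}
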
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:39:06.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 10 11:39:08.923: INFO: Creating ReplicaSet my-hostname-basic-f7afc560-c2a1-11ea-a406-0242ac11000f
Jul 10 11:39:09.001: INFO: Pod name my-hostname-basic-f7afc560-c2a1-11ea-a406-0242ac11000f: Found 0 pods out of 1
Jul 10 11:39:15.549: INFO: Pod name my-hostname-basic-f7afc560-c2a1-11ea-a406-0242ac11000f: Found 1 pods out of 1
Jul 10 11:39:15.549: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f7afc560-c2a1-11ea-a406-0242ac11000f" is running
Jul 10 11:39:19.561: INFO: Pod "my-hostname-basic-f7afc560-c2a1-11ea-a406-0242ac11000f-z9wtl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-10 11:39:09 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-10 11:39:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f7afc560-c2a1-11ea-a406-0242ac11000f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-10 11:39:09 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f7afc560-c2a1-11ea-a406-0242ac11000f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-10 11:39:09 +0000 UTC Reason: Message:}])
Jul 10 11:39:19.561: INFO: Trying to dial the pod
Jul 10 11:39:24.755: INFO: Controller my-hostname-basic-f7afc560-c2a1-11ea-a406-0242ac11000f: Got expected result from replica 1 [my-hostname-basic-f7afc560-c2a1-11ea-a406-0242ac11000f-z9wtl]: "my-hostname-basic-f7afc560-c2a1-11ea-a406-0242ac11000f-z9wtl", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:39:24.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-w77jr" for this suite.
Jul 10 11:39:30.843: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:39:31.093: INFO: namespace: e2e-tests-replicaset-w77jr, resource: bindings, ignored listing per whitelist
Jul 10 11:39:31.101: INFO: namespace e2e-tests-replicaset-w77jr deletion completed in 6.342292654s

• [SLOW TEST:24.246 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
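The ReplicaSet test above creates one replica of a public image and then dials each pod, expecting it to reply with its own hostname. A rough sketch of an equivalent ReplicaSet (the image name is an assumption; the log does not record which image the suite used, and any HTTP server that returns its hostname would do):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image
        ports:
        - containerPort: 9376
EOF
kubectl get pods -l app=my-hostname-basic -o wide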
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:39:31.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 10 11:39:38.075: INFO: Successfully updated pod "annotationupdate05046621-c2a2-11ea-a406-0242ac11000f"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:39:42.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2wrqf" for this suite.
Jul 10 11:40:04.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:40:04.228: INFO: namespace: e2e-tests-projected-2wrqf, resource: bindings, ignored listing per whitelist
Jul 10 11:40:04.282: INFO: namespace e2e-tests-projected-2wrqf deletion completed in 22.107850965s

• [SLOW TEST:33.181 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:40:04.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jul 10 11:40:04.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jul 10 11:40:06.832: INFO: stderr: ""
Jul 10 11:40:06.832: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45709\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45709/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:40:06.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qz946" for this suite.
Jul 10 11:40:12.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:40:12.898: INFO: namespace: e2e-tests-kubectl-qz946, resource: bindings, ignored listing per whitelist
Jul 10 11:40:12.941: INFO: namespace e2e-tests-kubectl-qz946 deletion completed in 6.104914648s

• [SLOW TEST:8.658 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
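The cluster-info test above simply runs the command logged at 11:40:04 and checks that a "Kubernetes master" entry appears in the output. Outside the suite, the same check is:

kubectl --kubeconfig=/root/.kube/config cluster-info
# lists the API server ("Kubernetes master") and KubeDNS endpoints;
# 'kubectl cluster-info dump' is the verbose form suggested in the output above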
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:40:12.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jul 10 11:40:21.411: INFO: 0 pods remaining
Jul 10 11:40:21.411: INFO: 0 pods has nil DeletionTimestamp
Jul 10 11:40:21.411: INFO: 
Jul 10 11:40:21.691: INFO: 0 pods remaining
Jul 10 11:40:21.691: INFO: 0 pods has nil DeletionTimestamp
Jul 10 11:40:21.691: INFO: 
STEP: Gathering metrics
W0710 11:40:22.836165       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 10 11:40:22.836: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:40:22.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cbqfb" for this suite.
Jul 10 11:40:32.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:40:33.013: INFO: namespace: e2e-tests-gc-cbqfb, resource: bindings, ignored listing per whitelist
Jul 10 11:40:33.042: INFO: namespace e2e-tests-gc-cbqfb deletion completed in 10.20314727s

• [SLOW TEST:20.101 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
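The garbage-collector test above relies on foreground cascading deletion: deleting the RC with propagationPolicy=Foreground puts a foregroundDeletion finalizer on it, so the RC object remains visible until all of its pods are gone. A sketch of the same request against the REST API via kubectl proxy (namespace and RC name are placeholders, not the ones the suite created):

kubectl proxy --port=8001 &
sleep 1
curl -X DELETE "http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/simpletest-rc" \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
# the RC stays listable (with a foregroundDeletion finalizer) until its pods are deleted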
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:40:33.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-29ecfbd7-c2a2-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume configMaps
Jul 10 11:40:33.227: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-29ed6b15-c2a2-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-c6tf8" to be "success or failure"
Jul 10 11:40:33.231: INFO: Pod "pod-projected-configmaps-29ed6b15-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306186ms
Jul 10 11:40:35.605: INFO: Pod "pod-projected-configmaps-29ed6b15-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.377755905s
Jul 10 11:40:37.647: INFO: Pod "pod-projected-configmaps-29ed6b15-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.419562772s
Jul 10 11:40:39.815: INFO: Pod "pod-projected-configmaps-29ed6b15-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.587494957s
Jul 10 11:40:41.831: INFO: Pod "pod-projected-configmaps-29ed6b15-c2a2-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.603768121s
STEP: Saw pod success
Jul 10 11:40:41.831: INFO: Pod "pod-projected-configmaps-29ed6b15-c2a2-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:40:41.833: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-29ed6b15-c2a2-11ea-a406-0242ac11000f container projected-configmap-volume-test: 
STEP: delete the pod
Jul 10 11:40:41.977: INFO: Waiting for pod pod-projected-configmaps-29ed6b15-c2a2-11ea-a406-0242ac11000f to disappear
Jul 10 11:40:42.011: INFO: Pod pod-projected-configmaps-29ed6b15-c2a2-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:40:42.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c6tf8" for this suite.
Jul 10 11:40:48.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:40:48.063: INFO: namespace: e2e-tests-projected-c6tf8, resource: bindings, ignored listing per whitelist
Jul 10 11:40:48.123: INFO: namespace e2e-tests-projected-c6tf8 deletion completed in 6.109943777s

• [SLOW TEST:15.081 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
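The projected-configMap test above mounts a ConfigMap through a projected volume and reads a key back from the mounted file. A minimal sketch under the same API fields (names, image, and key are illustrative, not the suite's generated ones):

kubectl create configmap projected-cm-example --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-cm-example
EOF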
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:40:48.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 10 11:40:48.995: INFO: Waiting up to 5m0s for pod "downwardapi-volume-33517ce6-c2a2-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-v994w" to be "success or failure"
Jul 10 11:40:49.286: INFO: Pod "downwardapi-volume-33517ce6-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 290.196576ms
Jul 10 11:40:51.290: INFO: Pod "downwardapi-volume-33517ce6-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294198036s
Jul 10 11:40:53.294: INFO: Pod "downwardapi-volume-33517ce6-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29813356s
Jul 10 11:40:55.297: INFO: Pod "downwardapi-volume-33517ce6-c2a2-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.301342565s
STEP: Saw pod success
Jul 10 11:40:55.297: INFO: Pod "downwardapi-volume-33517ce6-c2a2-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:40:55.299: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-33517ce6-c2a2-11ea-a406-0242ac11000f container client-container: 
STEP: delete the pod
Jul 10 11:40:55.608: INFO: Waiting for pod downwardapi-volume-33517ce6-c2a2-11ea-a406-0242ac11000f to disappear
Jul 10 11:40:55.620: INFO: Pod downwardapi-volume-33517ce6-c2a2-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:40:55.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v994w" for this suite.
Jul 10 11:41:01.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:41:01.671: INFO: namespace: e2e-tests-projected-v994w, resource: bindings, ignored listing per whitelist
Jul 10 11:41:01.723: INFO: namespace e2e-tests-projected-v994w deletion completed in 6.100364236s

• [SLOW TEST:13.599 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:41:01.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jul 10 11:41:02.010: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix830821632/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:41:02.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jbzz7" for this suite.
Jul 10 11:41:08.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:41:08.326: INFO: namespace: e2e-tests-kubectl-jbzz7, resource: bindings, ignored listing per whitelist
Jul 10 11:41:08.342: INFO: namespace e2e-tests-kubectl-jbzz7 deletion completed in 6.252038872s

• [SLOW TEST:6.619 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
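The proxy test above starts kubectl proxy on a Unix socket instead of a TCP port and then fetches /api/ through it. The same thing by hand (any writable socket path works; the suite used a temporary directory):

kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/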
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:41:08.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-3f0e6177-c2a2-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume secrets
Jul 10 11:41:08.690: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3f0eff08-c2a2-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-w7qcx" to be "success or failure"
Jul 10 11:41:08.720: INFO: Pod "pod-projected-secrets-3f0eff08-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.090119ms
Jul 10 11:41:10.724: INFO: Pod "pod-projected-secrets-3f0eff08-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033657505s
Jul 10 11:41:12.728: INFO: Pod "pod-projected-secrets-3f0eff08-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037624774s
Jul 10 11:41:14.731: INFO: Pod "pod-projected-secrets-3f0eff08-c2a2-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041279512s
STEP: Saw pod success
Jul 10 11:41:14.731: INFO: Pod "pod-projected-secrets-3f0eff08-c2a2-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:41:14.734: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-3f0eff08-c2a2-11ea-a406-0242ac11000f container projected-secret-volume-test: 
STEP: delete the pod
Jul 10 11:41:14.774: INFO: Waiting for pod pod-projected-secrets-3f0eff08-c2a2-11ea-a406-0242ac11000f to disappear
Jul 10 11:41:14.784: INFO: Pod pod-projected-secrets-3f0eff08-c2a2-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:41:14.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-w7qcx" for this suite.
Jul 10 11:41:20.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:41:20.968: INFO: namespace: e2e-tests-projected-w7qcx, resource: bindings, ignored listing per whitelist
Jul 10 11:41:21.012: INFO: namespace e2e-tests-projected-w7qcx deletion completed in 6.225834041s

• [SLOW TEST:12.670 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
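The projected-secret test above mounts a secret with a restrictive defaultMode while running as a non-root user with an fsGroup, so the files stay readable to the container. A sketch with the relevant fields (names, UID/GID values, and image are illustrative):

kubectl create secret generic projected-secret-example --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
    fsGroup: 1001
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: projected-secret-example
EOF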
SSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:41:21.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 10 11:41:21.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:41:25.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-9s2m6" for this suite.
Jul 10 11:42:11.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:42:12.000: INFO: namespace: e2e-tests-pods-9s2m6, resource: bindings, ignored listing per whitelist
Jul 10 11:42:12.021: INFO: namespace e2e-tests-pods-9s2m6 deletion completed in 46.090812374s

• [SLOW TEST:51.009 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:42:12.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 10 11:42:12.242: INFO: Waiting up to 5m0s for pod "pod-64f20920-c2a2-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-4kp6q" to be "success or failure"
Jul 10 11:42:12.313: INFO: Pod "pod-64f20920-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 70.841121ms
Jul 10 11:42:14.316: INFO: Pod "pod-64f20920-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074528884s
Jul 10 11:42:16.368: INFO: Pod "pod-64f20920-c2a2-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 4.126022691s
Jul 10 11:42:18.372: INFO: Pod "pod-64f20920-c2a2-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129714677s
STEP: Saw pod success
Jul 10 11:42:18.372: INFO: Pod "pod-64f20920-c2a2-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:42:18.374: INFO: Trying to get logs from node hunter-worker pod pod-64f20920-c2a2-11ea-a406-0242ac11000f container test-container: 
STEP: delete the pod
Jul 10 11:42:18.414: INFO: Waiting for pod pod-64f20920-c2a2-11ea-a406-0242ac11000f to disappear
Jul 10 11:42:18.516: INFO: Pod pod-64f20920-c2a2-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:42:18.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4kp6q" for this suite.
Jul 10 11:42:24.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:42:24.619: INFO: namespace: e2e-tests-emptydir-4kp6q, resource: bindings, ignored listing per whitelist
Jul 10 11:42:24.748: INFO: namespace e2e-tests-emptydir-4kp6q deletion completed in 6.227662928s

• [SLOW TEST:12.727 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
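The emptyDir test above checks file modes on a volume backed by the node's default medium. The suite uses a dedicated mount-test image not shown in this log; a hand-rolled approximation that creates a 0666 file on an emptyDir and prints its mode:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-mode-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/file && chmod 0666 /test-volume/file && stat -c '%a' /test-volume/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF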
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:42:24.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jul 10 11:42:24.892: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fs2gh,SelfLink:/api/v1/namespaces/e2e-tests-watch-fs2gh/configmaps/e2e-watch-test-watch-closed,UID:6c761abf-c2a2-11ea-b2c9-0242ac120008,ResourceVersion:15407,Generation:0,CreationTimestamp:2020-07-10 11:42:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 10 11:42:24.893: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fs2gh,SelfLink:/api/v1/namespaces/e2e-tests-watch-fs2gh/configmaps/e2e-watch-test-watch-closed,UID:6c761abf-c2a2-11ea-b2c9-0242ac120008,ResourceVersion:15408,Generation:0,CreationTimestamp:2020-07-10 11:42:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jul 10 11:42:24.929: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fs2gh,SelfLink:/api/v1/namespaces/e2e-tests-watch-fs2gh/configmaps/e2e-watch-test-watch-closed,UID:6c761abf-c2a2-11ea-b2c9-0242ac120008,ResourceVersion:15409,Generation:0,CreationTimestamp:2020-07-10 11:42:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 10 11:42:24.929: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-fs2gh,SelfLink:/api/v1/namespaces/e2e-tests-watch-fs2gh/configmaps/e2e-watch-test-watch-closed,UID:6c761abf-c2a2-11ea-b2c9-0242ac120008,ResourceVersion:15410,Generation:0,CreationTimestamp:2020-07-10 11:42:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:42:24.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-fs2gh" for this suite.
Jul 10 11:42:32.995: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:42:33.019: INFO: namespace: e2e-tests-watch-fs2gh, resource: bindings, ignored listing per whitelist
Jul 10 11:42:33.055: INFO: namespace e2e-tests-watch-fs2gh deletion completed in 8.115027857s

• [SLOW TEST:8.306 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
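The watch test above closes a watch after two notifications and then re-opens it from the last resourceVersion it saw (15408 in the log), so the MODIFIED/DELETED events that happened while it was closed are replayed. The equivalent raw API call, sketched through kubectl proxy (namespace is a placeholder; replay only works while the resourceVersion is still within the server's retention window):

kubectl proxy --port=8001 &
sleep 1
curl "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=15408"
# streams every ADDED/MODIFIED/DELETED event that occurred after that resourceVersion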
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:42:33.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 10 11:42:33.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-gvzn5'
Jul 10 11:42:33.430: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 10 11:42:33.431: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jul 10 11:42:38.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-gvzn5'
Jul 10 11:42:39.023: INFO: stderr: ""
Jul 10 11:42:39.023: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:42:39.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gvzn5" for this suite.
Jul 10 11:43:01.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:43:01.472: INFO: namespace: e2e-tests-kubectl-gvzn5, resource: bindings, ignored listing per whitelist
Jul 10 11:43:01.503: INFO: namespace e2e-tests-kubectl-gvzn5 deletion completed in 22.278665143s

• [SLOW TEST:28.447 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
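The kubectl-run test above is the command logged at 11:42:33, reproduced here standalone; note the stderr warning that the deployment generator is deprecated:

kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --generator=deployment/v1beta1 \
  --namespace=e2e-tests-kubectl-gvzn5
# on newer kubectl the equivalent is:
#   kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-gvzn5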
SSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:43:01.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:43:01.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-pm7bj" for this suite.
Jul 10 11:43:08.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:43:08.197: INFO: namespace: e2e-tests-services-pm7bj, resource: bindings, ignored listing per whitelist
Jul 10 11:43:08.253: INFO: namespace e2e-tests-services-pm7bj deletion completed in 6.638836222s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.750 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:43:08.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jul 10 11:43:08.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-77c7m run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jul 10 11:43:14.205: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0710 11:43:14.142902    2470 log.go:172] (0xc00014a630) (0xc00040e640) Create stream\nI0710 11:43:14.142953    2470 log.go:172] (0xc00014a630) (0xc00040e640) Stream added, broadcasting: 1\nI0710 11:43:14.145466    2470 log.go:172] (0xc00014a630) Reply frame received for 1\nI0710 11:43:14.145505    2470 log.go:172] (0xc00014a630) (0xc0006f5900) Create stream\nI0710 11:43:14.145514    2470 log.go:172] (0xc00014a630) (0xc0006f5900) Stream added, broadcasting: 3\nI0710 11:43:14.146537    2470 log.go:172] (0xc00014a630) Reply frame received for 3\nI0710 11:43:14.146575    2470 log.go:172] (0xc00014a630) (0xc00040e6e0) Create stream\nI0710 11:43:14.146587    2470 log.go:172] (0xc00014a630) (0xc00040e6e0) Stream added, broadcasting: 5\nI0710 11:43:14.147891    2470 log.go:172] (0xc00014a630) Reply frame received for 5\nI0710 11:43:14.147924    2470 log.go:172] (0xc00014a630) (0xc0007b0320) Create stream\nI0710 11:43:14.147931    2470 log.go:172] (0xc00014a630) (0xc0007b0320) Stream added, broadcasting: 7\nI0710 11:43:14.148623    2470 log.go:172] (0xc00014a630) Reply frame received for 7\nI0710 11:43:14.148868    2470 log.go:172] (0xc0006f5900) (3) Writing data frame\nI0710 11:43:14.148966    2470 log.go:172] (0xc0006f5900) (3) Writing data frame\nI0710 11:43:14.149646    2470 log.go:172] (0xc00014a630) Data frame received for 5\nI0710 11:43:14.149660    2470 log.go:172] (0xc00040e6e0) (5) Data frame handling\nI0710 11:43:14.149677    2470 log.go:172] (0xc00040e6e0) (5) Data frame sent\nI0710 11:43:14.150152    2470 log.go:172] (0xc00014a630) Data frame received for 5\nI0710 11:43:14.150173    2470 log.go:172] (0xc00040e6e0) (5) Data frame handling\nI0710 11:43:14.150192    2470 log.go:172] (0xc00040e6e0) (5) Data frame sent\nI0710 11:43:14.184410    2470 log.go:172] (0xc00014a630) Data frame received for 5\nI0710 11:43:14.184432    2470 log.go:172] (0xc00040e6e0) (5) Data frame handling\nI0710 11:43:14.184459    2470 log.go:172] (0xc00014a630) Data frame received for 7\nI0710 11:43:14.184472    2470 log.go:172] (0xc0007b0320) (7) Data frame handling\nI0710 11:43:14.184715    2470 log.go:172] (0xc00014a630) Data frame received for 1\nI0710 11:43:14.184786    2470 log.go:172] (0xc00040e640) (1) Data frame handling\nI0710 11:43:14.184799    2470 log.go:172] (0xc00040e640) (1) Data frame sent\nI0710 11:43:14.184893    2470 log.go:172] (0xc00014a630) (0xc00040e640) Stream removed, broadcasting: 1\nI0710 11:43:14.184951    2470 log.go:172] (0xc00014a630) (0xc0006f5900) Stream removed, broadcasting: 3\nI0710 11:43:14.185007    2470 log.go:172] (0xc00014a630) Go away received\nI0710 11:43:14.185030    2470 log.go:172] (0xc00014a630) (0xc00040e640) Stream removed, broadcasting: 1\nI0710 11:43:14.185055    2470 log.go:172] (0xc00014a630) (0xc0006f5900) Stream removed, broadcasting: 3\nI0710 11:43:14.185065    2470 log.go:172] (0xc00014a630) (0xc00040e6e0) Stream removed, broadcasting: 5\nI0710 11:43:14.185072    2470 log.go:172] (0xc00014a630) (0xc0007b0320) Stream removed, broadcasting: 7\n"
Jul 10 11:43:14.205: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:43:16.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-77c7m" for this suite.
Jul 10 11:43:30.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:43:30.342: INFO: namespace: e2e-tests-kubectl-77c7m, resource: bindings, ignored listing per whitelist
Jul 10 11:43:30.403: INFO: namespace e2e-tests-kubectl-77c7m deletion completed in 14.188807058s

• [SLOW TEST:22.150 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:43:30.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-9399f632-c2a2-11ea-a406-0242ac11000f
STEP: Creating configMap with name cm-test-opt-upd-9399f690-c2a2-11ea-a406-0242ac11000f
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9399f632-c2a2-11ea-a406-0242ac11000f
STEP: Updating configmap cm-test-opt-upd-9399f690-c2a2-11ea-a406-0242ac11000f
STEP: Creating configMap with name cm-test-opt-create-9399f6b2-c2a2-11ea-a406-0242ac11000f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:43:42.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-w7gz7" for this suite.
Jul 10 11:44:04.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:44:04.991: INFO: namespace: e2e-tests-configmap-w7gz7, resource: bindings, ignored listing per whitelist
Jul 10 11:44:05.005: INFO: namespace e2e-tests-configmap-w7gz7 deletion completed in 22.09444643s

• [SLOW TEST:34.601 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
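The ConfigMap test above relies on the optional flag of a configMap volume source: the pod starts even though the referenced ConfigMap does not exist yet, and the kubelet projects the data in once it is created. A sketch of that pattern (names and image are illustrative, not the suite's generated ones):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional-example
spec:
  containers:
  - name: createcm-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm-volume-create/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: createcm-volume
      mountPath: /etc/cm-volume-create
  volumes:
  - name: createcm-volume
    configMap:
      name: cm-test-opt-create
      optional: true
EOF
kubectl create configmap cm-test-opt-create --from-literal=data-1=value-1
# after a kubelet sync the file appears inside the already-running pod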
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:44:05.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-a83830f0-c2a2-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume secrets
Jul 10 11:44:05.118: INFO: Waiting up to 5m0s for pod "pod-secrets-a83a478b-c2a2-11ea-a406-0242ac11000f" in namespace "e2e-tests-secrets-cwxk4" to be "success or failure"
Jul 10 11:44:05.140: INFO: Pod "pod-secrets-a83a478b-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.204266ms
Jul 10 11:44:07.143: INFO: Pod "pod-secrets-a83a478b-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025354677s
Jul 10 11:44:09.148: INFO: Pod "pod-secrets-a83a478b-c2a2-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 4.029942694s
Jul 10 11:44:11.152: INFO: Pod "pod-secrets-a83a478b-c2a2-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.034411783s
STEP: Saw pod success
Jul 10 11:44:11.152: INFO: Pod "pod-secrets-a83a478b-c2a2-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:44:11.156: INFO: Trying to get logs from node hunter-worker pod pod-secrets-a83a478b-c2a2-11ea-a406-0242ac11000f container secret-volume-test: 
STEP: delete the pod
Jul 10 11:44:11.197: INFO: Waiting for pod pod-secrets-a83a478b-c2a2-11ea-a406-0242ac11000f to disappear
Jul 10 11:44:11.217: INFO: Pod pod-secrets-a83a478b-c2a2-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:44:11.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-cwxk4" for this suite.
Jul 10 11:44:17.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:44:17.317: INFO: namespace: e2e-tests-secrets-cwxk4, resource: bindings, ignored listing per whitelist
Jul 10 11:44:17.350: INFO: namespace e2e-tests-secrets-cwxk4 deletion completed in 6.08911105s

• [SLOW TEST:12.345 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:44:17.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-2rl6h.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-2rl6h.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-2rl6h.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-2rl6h.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-2rl6h.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-2rl6h.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 10 11:44:47.039: INFO: DNS probes using e2e-tests-dns-2rl6h/dns-test-af916552-c2a2-11ea-a406-0242ac11000f succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:44:47.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-2rl6h" for this suite.
Jul 10 11:44:56.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:44:56.518: INFO: namespace: e2e-tests-dns-2rl6h, resource: bindings, ignored listing per whitelist
Jul 10 11:44:56.534: INFO: namespace e2e-tests-dns-2rl6h deletion completed in 8.77018137s

• [SLOW TEST:39.183 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
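The DNS probes above reduce to the dig/getent commands in the two "Running these commands" STEP lines: each loop writes an OK marker when a lookup for kubernetes.default (and its svc/cluster.local variants) returns a non-empty answer. A single manual check from a throwaway pod (any image with dig works; tutum/dnsutils is one common choice):

kubectl run dns-check --rm -i --restart=Never --image=tutum/dnsutils -- \
  dig +notcp +noall +answer +search kubernetes.default A
# a non-empty answer (the kube-apiserver ClusterIP) is what the probe records as "OK"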
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:44:56.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 10 11:44:57.770: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c73aebb7-c2a2-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-dpjmc" to be "success or failure"
Jul 10 11:44:58.064: INFO: Pod "downwardapi-volume-c73aebb7-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 293.138784ms
Jul 10 11:45:00.068: INFO: Pod "downwardapi-volume-c73aebb7-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297514602s
Jul 10 11:45:02.071: INFO: Pod "downwardapi-volume-c73aebb7-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.300976283s
Jul 10 11:45:04.075: INFO: Pod "downwardapi-volume-c73aebb7-c2a2-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.304740226s
STEP: Saw pod success
Jul 10 11:45:04.075: INFO: Pod "downwardapi-volume-c73aebb7-c2a2-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:45:04.078: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-c73aebb7-c2a2-11ea-a406-0242ac11000f container client-container: 
STEP: delete the pod
Jul 10 11:45:04.165: INFO: Waiting for pod downwardapi-volume-c73aebb7-c2a2-11ea-a406-0242ac11000f to disappear
Jul 10 11:45:04.176: INFO: Pod downwardapi-volume-c73aebb7-c2a2-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:45:04.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dpjmc" for this suite.
Jul 10 11:45:12.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:45:12.240: INFO: namespace: e2e-tests-downward-api-dpjmc, resource: bindings, ignored listing per whitelist
Jul 10 11:45:12.324: INFO: namespace e2e-tests-downward-api-dpjmc deletion completed in 8.144463725s

• [SLOW TEST:15.790 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
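
A minimal sketch of the kind of pod this test builds: a downward API volume whose single item carries an explicit file mode. The names, image, and command below are illustrative, not the suite's actual manifest.

# Illustrative sketch: downward API volume item with mode 0400.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 256   # 0400 in octal
EOF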
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:45:12.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 10 11:45:12.483: INFO: Waiting up to 5m0s for pod "downward-api-d061f078-c2a2-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-gwbfn" to be "success or failure"
Jul 10 11:45:12.488: INFO: Pod "downward-api-d061f078-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.916047ms
Jul 10 11:45:14.492: INFO: Pod "downward-api-d061f078-c2a2-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009274203s
Jul 10 11:45:16.495: INFO: Pod "downward-api-d061f078-c2a2-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 4.012191555s
Jul 10 11:45:18.499: INFO: Pod "downward-api-d061f078-c2a2-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016120419s
STEP: Saw pod success
Jul 10 11:45:18.499: INFO: Pod "downward-api-d061f078-c2a2-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:45:18.502: INFO: Trying to get logs from node hunter-worker2 pod downward-api-d061f078-c2a2-11ea-a406-0242ac11000f container dapi-container: 
STEP: delete the pod
Jul 10 11:45:18.541: INFO: Waiting for pod downward-api-d061f078-c2a2-11ea-a406-0242ac11000f to disappear
Jul 10 11:45:18.560: INFO: Pod downward-api-d061f078-c2a2-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:45:18.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-gwbfn" for this suite.
Jul 10 11:45:24.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:45:24.840: INFO: namespace: e2e-tests-downward-api-gwbfn, resource: bindings, ignored listing per whitelist
Jul 10 11:45:24.855: INFO: namespace e2e-tests-downward-api-gwbfn deletion completed in 6.291882381s

• [SLOW TEST:12.532 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
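
The behaviour exercised here can be reproduced with a pod like the following sketch; the pod name and image are illustrative, the fieldRef is the one the test checks.

# Illustrative sketch: expose the node's IP to the container as an env var.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF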
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:45:24.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-4fr7h
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-4fr7h
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-4fr7h
Jul 10 11:45:25.387: INFO: Found 0 stateful pods, waiting for 1
Jul 10 11:45:35.391: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul 10 11:45:35.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4fr7h ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 10 11:45:36.197: INFO: stderr: "I0710 11:45:35.526330    2494 log.go:172] (0xc00083a2c0) (0xc00075a640) Create stream\nI0710 11:45:35.526381    2494 log.go:172] (0xc00083a2c0) (0xc00075a640) Stream added, broadcasting: 1\nI0710 11:45:35.528702    2494 log.go:172] (0xc00083a2c0) Reply frame received for 1\nI0710 11:45:35.528735    2494 log.go:172] (0xc00083a2c0) (0xc0005c6be0) Create stream\nI0710 11:45:35.528744    2494 log.go:172] (0xc00083a2c0) (0xc0005c6be0) Stream added, broadcasting: 3\nI0710 11:45:35.529755    2494 log.go:172] (0xc00083a2c0) Reply frame received for 3\nI0710 11:45:35.529791    2494 log.go:172] (0xc00083a2c0) (0xc0002dc000) Create stream\nI0710 11:45:35.529802    2494 log.go:172] (0xc00083a2c0) (0xc0002dc000) Stream added, broadcasting: 5\nI0710 11:45:35.530590    2494 log.go:172] (0xc00083a2c0) Reply frame received for 5\nI0710 11:45:36.191013    2494 log.go:172] (0xc00083a2c0) Data frame received for 3\nI0710 11:45:36.191051    2494 log.go:172] (0xc0005c6be0) (3) Data frame handling\nI0710 11:45:36.191065    2494 log.go:172] (0xc0005c6be0) (3) Data frame sent\nI0710 11:45:36.191075    2494 log.go:172] (0xc00083a2c0) Data frame received for 3\nI0710 11:45:36.191086    2494 log.go:172] (0xc0005c6be0) (3) Data frame handling\nI0710 11:45:36.191136    2494 log.go:172] (0xc00083a2c0) Data frame received for 5\nI0710 11:45:36.191178    2494 log.go:172] (0xc0002dc000) (5) Data frame handling\nI0710 11:45:36.193225    2494 log.go:172] (0xc00083a2c0) Data frame received for 1\nI0710 11:45:36.193248    2494 log.go:172] (0xc00075a640) (1) Data frame handling\nI0710 11:45:36.193270    2494 log.go:172] (0xc00075a640) (1) Data frame sent\nI0710 11:45:36.193287    2494 log.go:172] (0xc00083a2c0) (0xc00075a640) Stream removed, broadcasting: 1\nI0710 11:45:36.193373    2494 log.go:172] (0xc00083a2c0) Go away received\nI0710 11:45:36.193527    2494 log.go:172] (0xc00083a2c0) (0xc00075a640) Stream removed, broadcasting: 1\nI0710 11:45:36.193556    2494 log.go:172] (0xc00083a2c0) (0xc0005c6be0) Stream removed, broadcasting: 3\nI0710 11:45:36.193570    2494 log.go:172] (0xc00083a2c0) (0xc0002dc000) Stream removed, broadcasting: 5\n"
Jul 10 11:45:36.197: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 10 11:45:36.197: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 10 11:45:36.909: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 10 11:45:46.914: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 10 11:45:46.914: INFO: Waiting for statefulset status.replicas updated to 0
Jul 10 11:45:46.932: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999442s
Jul 10 11:45:47.937: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990714634s
Jul 10 11:45:48.941: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.985772383s
Jul 10 11:45:49.946: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.981466291s
Jul 10 11:45:50.950: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.976889985s
Jul 10 11:45:51.955: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972870041s
Jul 10 11:45:52.959: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.968230586s
Jul 10 11:45:53.963: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.964126032s
Jul 10 11:45:55.215: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.960034076s
Jul 10 11:45:56.418: INFO: Verifying statefulset ss doesn't scale past 1 for another 707.920189ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-4fr7h
Jul 10 11:45:57.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4fr7h ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 10 11:45:57.610: INFO: stderr: "I0710 11:45:57.552332    2517 log.go:172] (0xc000154840) (0xc000607220) Create stream\nI0710 11:45:57.552395    2517 log.go:172] (0xc000154840) (0xc000607220) Stream added, broadcasting: 1\nI0710 11:45:57.554519    2517 log.go:172] (0xc000154840) Reply frame received for 1\nI0710 11:45:57.554574    2517 log.go:172] (0xc000154840) (0xc00074c000) Create stream\nI0710 11:45:57.554601    2517 log.go:172] (0xc000154840) (0xc00074c000) Stream added, broadcasting: 3\nI0710 11:45:57.555321    2517 log.go:172] (0xc000154840) Reply frame received for 3\nI0710 11:45:57.555349    2517 log.go:172] (0xc000154840) (0xc0006072c0) Create stream\nI0710 11:45:57.555359    2517 log.go:172] (0xc000154840) (0xc0006072c0) Stream added, broadcasting: 5\nI0710 11:45:57.556057    2517 log.go:172] (0xc000154840) Reply frame received for 5\nI0710 11:45:57.605101    2517 log.go:172] (0xc000154840) Data frame received for 5\nI0710 11:45:57.605170    2517 log.go:172] (0xc0006072c0) (5) Data frame handling\nI0710 11:45:57.605218    2517 log.go:172] (0xc000154840) Data frame received for 3\nI0710 11:45:57.605249    2517 log.go:172] (0xc00074c000) (3) Data frame handling\nI0710 11:45:57.605288    2517 log.go:172] (0xc00074c000) (3) Data frame sent\nI0710 11:45:57.605309    2517 log.go:172] (0xc000154840) Data frame received for 3\nI0710 11:45:57.605331    2517 log.go:172] (0xc00074c000) (3) Data frame handling\nI0710 11:45:57.606795    2517 log.go:172] (0xc000154840) Data frame received for 1\nI0710 11:45:57.606822    2517 log.go:172] (0xc000607220) (1) Data frame handling\nI0710 11:45:57.606834    2517 log.go:172] (0xc000607220) (1) Data frame sent\nI0710 11:45:57.606856    2517 log.go:172] (0xc000154840) (0xc000607220) Stream removed, broadcasting: 1\nI0710 11:45:57.606890    2517 log.go:172] (0xc000154840) Go away received\nI0710 11:45:57.607058    2517 log.go:172] (0xc000154840) (0xc000607220) Stream removed, broadcasting: 1\nI0710 11:45:57.607082    2517 log.go:172] (0xc000154840) (0xc00074c000) Stream removed, broadcasting: 3\nI0710 11:45:57.607096    2517 log.go:172] (0xc000154840) (0xc0006072c0) Stream removed, broadcasting: 5\n"
Jul 10 11:45:57.610: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 10 11:45:57.610: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 10 11:45:57.614: INFO: Found 1 stateful pods, waiting for 3
Jul 10 11:46:07.619: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 11:46:07.619: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 11:46:07.619: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jul 10 11:46:07.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4fr7h ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 10 11:46:07.814: INFO: stderr: "I0710 11:46:07.750390    2540 log.go:172] (0xc00088e2c0) (0xc0005f94a0) Create stream\nI0710 11:46:07.750438    2540 log.go:172] (0xc00088e2c0) (0xc0005f94a0) Stream added, broadcasting: 1\nI0710 11:46:07.752451    2540 log.go:172] (0xc00088e2c0) Reply frame received for 1\nI0710 11:46:07.752496    2540 log.go:172] (0xc00088e2c0) (0xc000654000) Create stream\nI0710 11:46:07.752508    2540 log.go:172] (0xc00088e2c0) (0xc000654000) Stream added, broadcasting: 3\nI0710 11:46:07.753252    2540 log.go:172] (0xc00088e2c0) Reply frame received for 3\nI0710 11:46:07.753281    2540 log.go:172] (0xc00088e2c0) (0xc0006540a0) Create stream\nI0710 11:46:07.753291    2540 log.go:172] (0xc00088e2c0) (0xc0006540a0) Stream added, broadcasting: 5\nI0710 11:46:07.753938    2540 log.go:172] (0xc00088e2c0) Reply frame received for 5\nI0710 11:46:07.808165    2540 log.go:172] (0xc00088e2c0) Data frame received for 3\nI0710 11:46:07.808194    2540 log.go:172] (0xc000654000) (3) Data frame handling\nI0710 11:46:07.808209    2540 log.go:172] (0xc000654000) (3) Data frame sent\nI0710 11:46:07.808214    2540 log.go:172] (0xc00088e2c0) Data frame received for 3\nI0710 11:46:07.808219    2540 log.go:172] (0xc000654000) (3) Data frame handling\nI0710 11:46:07.808249    2540 log.go:172] (0xc00088e2c0) Data frame received for 5\nI0710 11:46:07.808254    2540 log.go:172] (0xc0006540a0) (5) Data frame handling\nI0710 11:46:07.811719    2540 log.go:172] (0xc00088e2c0) Data frame received for 1\nI0710 11:46:07.811742    2540 log.go:172] (0xc0005f94a0) (1) Data frame handling\nI0710 11:46:07.811754    2540 log.go:172] (0xc0005f94a0) (1) Data frame sent\nI0710 11:46:07.811765    2540 log.go:172] (0xc00088e2c0) (0xc0005f94a0) Stream removed, broadcasting: 1\nI0710 11:46:07.811786    2540 log.go:172] (0xc00088e2c0) Go away received\nI0710 11:46:07.812019    2540 log.go:172] (0xc00088e2c0) (0xc0005f94a0) Stream removed, broadcasting: 1\nI0710 11:46:07.812036    2540 log.go:172] (0xc00088e2c0) (0xc000654000) Stream removed, broadcasting: 3\nI0710 11:46:07.812045    2540 log.go:172] (0xc00088e2c0) (0xc0006540a0) Stream removed, broadcasting: 5\n"
Jul 10 11:46:07.814: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 10 11:46:07.814: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 10 11:46:07.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4fr7h ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 10 11:46:08.051: INFO: stderr: "I0710 11:46:07.941931    2563 log.go:172] (0xc000138580) (0xc0006966e0) Create stream\nI0710 11:46:07.942003    2563 log.go:172] (0xc000138580) (0xc0006966e0) Stream added, broadcasting: 1\nI0710 11:46:07.944969    2563 log.go:172] (0xc000138580) Reply frame received for 1\nI0710 11:46:07.945012    2563 log.go:172] (0xc000138580) (0xc000520aa0) Create stream\nI0710 11:46:07.945027    2563 log.go:172] (0xc000138580) (0xc000520aa0) Stream added, broadcasting: 3\nI0710 11:46:07.946049    2563 log.go:172] (0xc000138580) Reply frame received for 3\nI0710 11:46:07.946090    2563 log.go:172] (0xc000138580) (0xc000696780) Create stream\nI0710 11:46:07.946104    2563 log.go:172] (0xc000138580) (0xc000696780) Stream added, broadcasting: 5\nI0710 11:46:07.946963    2563 log.go:172] (0xc000138580) Reply frame received for 5\nI0710 11:46:08.045011    2563 log.go:172] (0xc000138580) Data frame received for 3\nI0710 11:46:08.045083    2563 log.go:172] (0xc000520aa0) (3) Data frame handling\nI0710 11:46:08.045135    2563 log.go:172] (0xc000138580) Data frame received for 5\nI0710 11:46:08.045171    2563 log.go:172] (0xc000696780) (5) Data frame handling\nI0710 11:46:08.045207    2563 log.go:172] (0xc000520aa0) (3) Data frame sent\nI0710 11:46:08.045232    2563 log.go:172] (0xc000138580) Data frame received for 3\nI0710 11:46:08.045249    2563 log.go:172] (0xc000520aa0) (3) Data frame handling\nI0710 11:46:08.047164    2563 log.go:172] (0xc000138580) Data frame received for 1\nI0710 11:46:08.047206    2563 log.go:172] (0xc0006966e0) (1) Data frame handling\nI0710 11:46:08.047228    2563 log.go:172] (0xc0006966e0) (1) Data frame sent\nI0710 11:46:08.047254    2563 log.go:172] (0xc000138580) (0xc0006966e0) Stream removed, broadcasting: 1\nI0710 11:46:08.047509    2563 log.go:172] (0xc000138580) Go away received\nI0710 11:46:08.047559    2563 log.go:172] (0xc000138580) (0xc0006966e0) Stream removed, broadcasting: 1\nI0710 11:46:08.047613    2563 log.go:172] (0xc000138580) (0xc000520aa0) Stream removed, broadcasting: 3\nI0710 11:46:08.047640    2563 log.go:172] (0xc000138580) (0xc000696780) Stream removed, broadcasting: 5\n"
Jul 10 11:46:08.051: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 10 11:46:08.051: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 10 11:46:08.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4fr7h ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 10 11:46:08.342: INFO: stderr: "I0710 11:46:08.170791    2586 log.go:172] (0xc0006f24d0) (0xc00071e640) Create stream\nI0710 11:46:08.170845    2586 log.go:172] (0xc0006f24d0) (0xc00071e640) Stream added, broadcasting: 1\nI0710 11:46:08.173662    2586 log.go:172] (0xc0006f24d0) Reply frame received for 1\nI0710 11:46:08.173703    2586 log.go:172] (0xc0006f24d0) (0xc00038ec80) Create stream\nI0710 11:46:08.173715    2586 log.go:172] (0xc0006f24d0) (0xc00038ec80) Stream added, broadcasting: 3\nI0710 11:46:08.176106    2586 log.go:172] (0xc0006f24d0) Reply frame received for 3\nI0710 11:46:08.176140    2586 log.go:172] (0xc0006f24d0) (0xc00071e6e0) Create stream\nI0710 11:46:08.176153    2586 log.go:172] (0xc0006f24d0) (0xc00071e6e0) Stream added, broadcasting: 5\nI0710 11:46:08.176885    2586 log.go:172] (0xc0006f24d0) Reply frame received for 5\nI0710 11:46:08.333319    2586 log.go:172] (0xc0006f24d0) Data frame received for 5\nI0710 11:46:08.333387    2586 log.go:172] (0xc00071e6e0) (5) Data frame handling\nI0710 11:46:08.333431    2586 log.go:172] (0xc0006f24d0) Data frame received for 3\nI0710 11:46:08.333473    2586 log.go:172] (0xc00038ec80) (3) Data frame handling\nI0710 11:46:08.333501    2586 log.go:172] (0xc00038ec80) (3) Data frame sent\nI0710 11:46:08.333521    2586 log.go:172] (0xc0006f24d0) Data frame received for 3\nI0710 11:46:08.333570    2586 log.go:172] (0xc00038ec80) (3) Data frame handling\nI0710 11:46:08.337259    2586 log.go:172] (0xc0006f24d0) Data frame received for 1\nI0710 11:46:08.337292    2586 log.go:172] (0xc00071e640) (1) Data frame handling\nI0710 11:46:08.337309    2586 log.go:172] (0xc00071e640) (1) Data frame sent\nI0710 11:46:08.337328    2586 log.go:172] (0xc0006f24d0) (0xc00071e640) Stream removed, broadcasting: 1\nI0710 11:46:08.337503    2586 log.go:172] (0xc0006f24d0) Go away received\nI0710 11:46:08.337641    2586 log.go:172] (0xc0006f24d0) (0xc00071e640) Stream removed, broadcasting: 1\nI0710 11:46:08.337679    2586 log.go:172] (0xc0006f24d0) (0xc00038ec80) Stream removed, broadcasting: 3\nI0710 11:46:08.337697    2586 log.go:172] (0xc0006f24d0) (0xc00071e6e0) Stream removed, broadcasting: 5\n"
Jul 10 11:46:08.342: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 10 11:46:08.342: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 10 11:46:08.342: INFO: Waiting for statefulset status.replicas updated to 0
Jul 10 11:46:08.345: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jul 10 11:46:18.351: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 10 11:46:18.351: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 10 11:46:18.351: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 10 11:46:18.382: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999576s
Jul 10 11:46:19.386: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.973560718s
Jul 10 11:46:20.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969091243s
Jul 10 11:46:21.421: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.937775611s
Jul 10 11:46:22.425: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.934544888s
Jul 10 11:46:23.428: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.930498493s
Jul 10 11:46:24.433: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.927261665s
Jul 10 11:46:25.436: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.922910814s
Jul 10 11:46:26.439: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.919872495s
Jul 10 11:46:27.443: INFO: Verifying statefulset ss doesn't scale past 3 for another 916.358117ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-4fr7h
Jul 10 11:46:28.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4fr7h ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 10 11:46:28.638: INFO: stderr: "I0710 11:46:28.567946    2608 log.go:172] (0xc000724370) (0xc000764640) Create stream\nI0710 11:46:28.568018    2608 log.go:172] (0xc000724370) (0xc000764640) Stream added, broadcasting: 1\nI0710 11:46:28.570004    2608 log.go:172] (0xc000724370) Reply frame received for 1\nI0710 11:46:28.570055    2608 log.go:172] (0xc000724370) (0xc0005c4d20) Create stream\nI0710 11:46:28.570070    2608 log.go:172] (0xc000724370) (0xc0005c4d20) Stream added, broadcasting: 3\nI0710 11:46:28.570832    2608 log.go:172] (0xc000724370) Reply frame received for 3\nI0710 11:46:28.570863    2608 log.go:172] (0xc000724370) (0xc0002fe000) Create stream\nI0710 11:46:28.570877    2608 log.go:172] (0xc000724370) (0xc0002fe000) Stream added, broadcasting: 5\nI0710 11:46:28.571576    2608 log.go:172] (0xc000724370) Reply frame received for 5\nI0710 11:46:28.634306    2608 log.go:172] (0xc000724370) Data frame received for 3\nI0710 11:46:28.634340    2608 log.go:172] (0xc0005c4d20) (3) Data frame handling\nI0710 11:46:28.634353    2608 log.go:172] (0xc0005c4d20) (3) Data frame sent\nI0710 11:46:28.634364    2608 log.go:172] (0xc000724370) Data frame received for 3\nI0710 11:46:28.634378    2608 log.go:172] (0xc0005c4d20) (3) Data frame handling\nI0710 11:46:28.634389    2608 log.go:172] (0xc000724370) Data frame received for 5\nI0710 11:46:28.634398    2608 log.go:172] (0xc0002fe000) (5) Data frame handling\nI0710 11:46:28.635609    2608 log.go:172] (0xc000724370) Data frame received for 1\nI0710 11:46:28.635644    2608 log.go:172] (0xc000764640) (1) Data frame handling\nI0710 11:46:28.635661    2608 log.go:172] (0xc000764640) (1) Data frame sent\nI0710 11:46:28.635672    2608 log.go:172] (0xc000724370) (0xc000764640) Stream removed, broadcasting: 1\nI0710 11:46:28.635694    2608 log.go:172] (0xc000724370) Go away received\nI0710 11:46:28.635943    2608 log.go:172] (0xc000724370) (0xc000764640) Stream removed, broadcasting: 1\nI0710 11:46:28.635971    2608 log.go:172] (0xc000724370) (0xc0005c4d20) Stream removed, broadcasting: 3\nI0710 11:46:28.635988    2608 log.go:172] (0xc000724370) (0xc0002fe000) Stream removed, broadcasting: 5\n"
Jul 10 11:46:28.638: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 10 11:46:28.638: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 10 11:46:28.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4fr7h ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 10 11:46:28.855: INFO: stderr: "I0710 11:46:28.783073    2631 log.go:172] (0xc0007e2210) (0xc000706640) Create stream\nI0710 11:46:28.783150    2631 log.go:172] (0xc0007e2210) (0xc000706640) Stream added, broadcasting: 1\nI0710 11:46:28.785506    2631 log.go:172] (0xc0007e2210) Reply frame received for 1\nI0710 11:46:28.785544    2631 log.go:172] (0xc0007e2210) (0xc0007066e0) Create stream\nI0710 11:46:28.785554    2631 log.go:172] (0xc0007e2210) (0xc0007066e0) Stream added, broadcasting: 3\nI0710 11:46:28.786319    2631 log.go:172] (0xc0007e2210) Reply frame received for 3\nI0710 11:46:28.786361    2631 log.go:172] (0xc0007e2210) (0xc0005bebe0) Create stream\nI0710 11:46:28.786382    2631 log.go:172] (0xc0007e2210) (0xc0005bebe0) Stream added, broadcasting: 5\nI0710 11:46:28.787027    2631 log.go:172] (0xc0007e2210) Reply frame received for 5\nI0710 11:46:28.851474    2631 log.go:172] (0xc0007e2210) Data frame received for 5\nI0710 11:46:28.851503    2631 log.go:172] (0xc0005bebe0) (5) Data frame handling\nI0710 11:46:28.851529    2631 log.go:172] (0xc0007e2210) Data frame received for 3\nI0710 11:46:28.851538    2631 log.go:172] (0xc0007066e0) (3) Data frame handling\nI0710 11:46:28.851544    2631 log.go:172] (0xc0007066e0) (3) Data frame sent\nI0710 11:46:28.851550    2631 log.go:172] (0xc0007e2210) Data frame received for 3\nI0710 11:46:28.851554    2631 log.go:172] (0xc0007066e0) (3) Data frame handling\nI0710 11:46:28.853094    2631 log.go:172] (0xc0007e2210) Data frame received for 1\nI0710 11:46:28.853112    2631 log.go:172] (0xc000706640) (1) Data frame handling\nI0710 11:46:28.853141    2631 log.go:172] (0xc000706640) (1) Data frame sent\nI0710 11:46:28.853164    2631 log.go:172] (0xc0007e2210) (0xc000706640) Stream removed, broadcasting: 1\nI0710 11:46:28.853177    2631 log.go:172] (0xc0007e2210) Go away received\nI0710 11:46:28.853350    2631 log.go:172] (0xc0007e2210) (0xc000706640) Stream removed, broadcasting: 1\nI0710 11:46:28.853362    2631 log.go:172] (0xc0007e2210) (0xc0007066e0) Stream removed, broadcasting: 3\nI0710 11:46:28.853367    2631 log.go:172] (0xc0007e2210) (0xc0005bebe0) Stream removed, broadcasting: 5\n"
Jul 10 11:46:28.855: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 10 11:46:28.855: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 10 11:46:28.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-4fr7h ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 10 11:46:29.109: INFO: stderr: "I0710 11:46:29.039602    2653 log.go:172] (0xc000138790) (0xc0005c7400) Create stream\nI0710 11:46:29.039669    2653 log.go:172] (0xc000138790) (0xc0005c7400) Stream added, broadcasting: 1\nI0710 11:46:29.041969    2653 log.go:172] (0xc000138790) Reply frame received for 1\nI0710 11:46:29.042037    2653 log.go:172] (0xc000138790) (0xc0006ba460) Create stream\nI0710 11:46:29.042054    2653 log.go:172] (0xc000138790) (0xc0006ba460) Stream added, broadcasting: 3\nI0710 11:46:29.042821    2653 log.go:172] (0xc000138790) Reply frame received for 3\nI0710 11:46:29.042850    2653 log.go:172] (0xc000138790) (0xc0006ba500) Create stream\nI0710 11:46:29.042856    2653 log.go:172] (0xc000138790) (0xc0006ba500) Stream added, broadcasting: 5\nI0710 11:46:29.043495    2653 log.go:172] (0xc000138790) Reply frame received for 5\nI0710 11:46:29.103251    2653 log.go:172] (0xc000138790) Data frame received for 5\nI0710 11:46:29.103311    2653 log.go:172] (0xc000138790) Data frame received for 3\nI0710 11:46:29.103359    2653 log.go:172] (0xc0006ba460) (3) Data frame handling\nI0710 11:46:29.103376    2653 log.go:172] (0xc0006ba460) (3) Data frame sent\nI0710 11:46:29.103387    2653 log.go:172] (0xc000138790) Data frame received for 3\nI0710 11:46:29.103397    2653 log.go:172] (0xc0006ba460) (3) Data frame handling\nI0710 11:46:29.103440    2653 log.go:172] (0xc0006ba500) (5) Data frame handling\nI0710 11:46:29.104189    2653 log.go:172] (0xc000138790) Data frame received for 1\nI0710 11:46:29.104223    2653 log.go:172] (0xc0005c7400) (1) Data frame handling\nI0710 11:46:29.104237    2653 log.go:172] (0xc0005c7400) (1) Data frame sent\nI0710 11:46:29.104250    2653 log.go:172] (0xc000138790) (0xc0005c7400) Stream removed, broadcasting: 1\nI0710 11:46:29.104272    2653 log.go:172] (0xc000138790) Go away received\nI0710 11:46:29.104518    2653 log.go:172] (0xc000138790) (0xc0005c7400) Stream removed, broadcasting: 1\nI0710 11:46:29.104538    2653 log.go:172] (0xc000138790) (0xc0006ba460) Stream removed, broadcasting: 3\nI0710 11:46:29.104548    2653 log.go:172] (0xc000138790) (0xc0006ba500) Stream removed, broadcasting: 5\n"
Jul 10 11:46:29.109: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 10 11:46:29.110: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 10 11:46:29.110: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 10 11:46:49.186: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4fr7h
Jul 10 11:46:49.189: INFO: Scaling statefulset ss to 0
Jul 10 11:46:49.197: INFO: Waiting for statefulset status.replicas updated to 0
Jul 10 11:46:49.199: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:46:49.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-4fr7h" for this suite.
Jul 10 11:46:57.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:46:57.290: INFO: namespace: e2e-tests-statefulset-4fr7h, resource: bindings, ignored listing per whitelist
Jul 10 11:46:57.583: INFO: namespace e2e-tests-statefulset-4fr7h deletion completed in 8.363252754s

• [SLOW TEST:92.728 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
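
The sequence the log above records, condensed into standalone commands; the namespace, statefulset name, and label selector are from this run, the rest is a sketch of the same readiness-breaking trick.

# Break ss-0's readiness by moving the file its readiness probe serves, scale while it is
# unready to see the operation held back, restore the file, and watch pods come up in order.
NS=e2e-tests-statefulset-4fr7h
kubectl -n "$NS" exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
kubectl -n "$NS" scale statefulset ss --replicas=3     # stays at 1 while ss-0 is unready
kubectl -n "$NS" exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
kubectl -n "$NS" get pods -l baz=blah,foo=bar -w       # ss-1 then ss-2 start in order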
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:46:57.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-0f5511d8-c2a3-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume secrets
Jul 10 11:46:58.398: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0f5ee584-c2a3-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-g4rgc" to be "success or failure"
Jul 10 11:46:58.437: INFO: Pod "pod-projected-secrets-0f5ee584-c2a3-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 38.652015ms
Jul 10 11:47:00.441: INFO: Pod "pod-projected-secrets-0f5ee584-c2a3-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042225419s
Jul 10 11:47:02.850: INFO: Pod "pod-projected-secrets-0f5ee584-c2a3-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.451252746s
Jul 10 11:47:04.853: INFO: Pod "pod-projected-secrets-0f5ee584-c2a3-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.45431207s
STEP: Saw pod success
Jul 10 11:47:04.853: INFO: Pod "pod-projected-secrets-0f5ee584-c2a3-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:47:04.855: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-0f5ee584-c2a3-11ea-a406-0242ac11000f container projected-secret-volume-test: 
STEP: delete the pod
Jul 10 11:47:04.912: INFO: Waiting for pod pod-projected-secrets-0f5ee584-c2a3-11ea-a406-0242ac11000f to disappear
Jul 10 11:47:04.916: INFO: Pod pod-projected-secrets-0f5ee584-c2a3-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:47:04.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g4rgc" for this suite.
Jul 10 11:47:15.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:47:15.298: INFO: namespace: e2e-tests-projected-g4rgc, resource: bindings, ignored listing per whitelist
Jul 10 11:47:15.304: INFO: namespace e2e-tests-projected-g4rgc deletion completed in 10.384557714s

• [SLOW TEST:17.720 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
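
A sketch of the shape of secret and pod this test exercises: a projected secret source whose item is remapped to a new path with an explicit item mode. Names, image, and data below are illustrative.

# Illustrative sketch: projected secret with a remapped item and an explicit item mode.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1
            mode: 256   # 0400 in octal
EOF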
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:47:15.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-d24c
STEP: Creating a pod to test atomic-volume-subpath
Jul 10 11:47:15.620: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-d24c" in namespace "e2e-tests-subpath-vtjtw" to be "success or failure"
Jul 10 11:47:15.853: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Pending", Reason="", readiness=false. Elapsed: 232.888578ms
Jul 10 11:47:17.856: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236256924s
Jul 10 11:47:19.860: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240401012s
Jul 10 11:47:21.934: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.31388301s
Jul 10 11:47:24.165: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.545255922s
Jul 10 11:47:26.544: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.924081592s
Jul 10 11:47:28.695: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.074611521s
Jul 10 11:47:30.790: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.17025241s
Jul 10 11:47:33.036: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Running", Reason="", readiness=false. Elapsed: 17.416119843s
Jul 10 11:47:35.040: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Running", Reason="", readiness=false. Elapsed: 19.419919417s
Jul 10 11:47:37.044: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Running", Reason="", readiness=false. Elapsed: 21.4235925s
Jul 10 11:47:39.048: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Running", Reason="", readiness=false. Elapsed: 23.428190307s
Jul 10 11:47:43.887: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Running", Reason="", readiness=false. Elapsed: 28.266943482s
Jul 10 11:47:46.048: INFO: Pod "pod-subpath-test-projected-d24c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.42807644s
STEP: Saw pod success
Jul 10 11:47:46.048: INFO: Pod "pod-subpath-test-projected-d24c" satisfied condition "success or failure"
Jul 10 11:47:46.050: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-d24c container test-container-subpath-projected-d24c: 
STEP: delete the pod
Jul 10 11:47:46.482: INFO: Waiting for pod pod-subpath-test-projected-d24c to disappear
Jul 10 11:47:46.653: INFO: Pod pod-subpath-test-projected-d24c no longer exists
STEP: Deleting pod pod-subpath-test-projected-d24c
Jul 10 11:47:46.653: INFO: Deleting pod "pod-subpath-test-projected-d24c" in namespace "e2e-tests-subpath-vtjtw"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:47:46.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-vtjtw" for this suite.
Jul 10 11:47:53.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:47:53.379: INFO: namespace: e2e-tests-subpath-vtjtw, resource: bindings, ignored listing per whitelist
Jul 10 11:47:53.408: INFO: namespace e2e-tests-subpath-vtjtw deletion completed in 6.548237328s

• [SLOW TEST:38.105 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
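
A sketch of the subPath pattern this test covers: mounting a single key of a projected (here ConfigMap-backed) volume at a file path inside the container. Everything below is illustrative, not the suite's actual atomic-writer setup.

# Illustrative sketch: subPath mount of one file from a projected volume.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-config
data:
  probe-file: "hello from a projected subPath"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /probe-volume/probe-file"]
    volumeMounts:
    - name: probe-volume
      mountPath: /probe-volume/probe-file
      subPath: probe-file
  volumes:
  - name: probe-volume
    projected:
      sources:
      - configMap:
          name: subpath-demo-config
EOF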
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:47:53.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 10 11:47:53.769: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3069a919-c2a3-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-f2mj6" to be "success or failure"
Jul 10 11:47:54.138: INFO: Pod "downwardapi-volume-3069a919-c2a3-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 368.656923ms
Jul 10 11:47:56.142: INFO: Pod "downwardapi-volume-3069a919-c2a3-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.372705402s
Jul 10 11:47:58.146: INFO: Pod "downwardapi-volume-3069a919-c2a3-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.376266056s
Jul 10 11:48:00.150: INFO: Pod "downwardapi-volume-3069a919-c2a3-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.380419944s
Jul 10 11:48:02.163: INFO: Pod "downwardapi-volume-3069a919-c2a3-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.393228219s
STEP: Saw pod success
Jul 10 11:48:02.163: INFO: Pod "downwardapi-volume-3069a919-c2a3-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:48:02.165: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-3069a919-c2a3-11ea-a406-0242ac11000f container client-container: 
STEP: delete the pod
Jul 10 11:48:02.477: INFO: Waiting for pod downwardapi-volume-3069a919-c2a3-11ea-a406-0242ac11000f to disappear
Jul 10 11:48:02.774: INFO: Pod downwardapi-volume-3069a919-c2a3-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:48:02.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-f2mj6" for this suite.
Jul 10 11:48:09.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:48:09.056: INFO: namespace: e2e-tests-downward-api-f2mj6, resource: bindings, ignored listing per whitelist
Jul 10 11:48:09.099: INFO: namespace e2e-tests-downward-api-f2mj6 deletion completed in 6.321910345s

• [SLOW TEST:15.691 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:48:09.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 10 11:48:09.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-v894q'
Jul 10 11:48:09.971: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 10 11:48:09.971: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jul 10 11:48:10.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-v894q'
Jul 10 11:48:11.064: INFO: stderr: ""
Jul 10 11:48:11.064: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:48:11.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-v894q" for this suite.
Jul 10 11:48:18.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:48:18.501: INFO: namespace: e2e-tests-kubectl-v894q, resource: bindings, ignored listing per whitelist
Jul 10 11:48:18.530: INFO: namespace e2e-tests-kubectl-v894q deletion completed in 7.156925037s

• [SLOW TEST:9.430 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
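
The deprecation warning in the output above is kubectl's own. On releases where the job generator has been removed, the equivalent of this step would be the following sketch, reusing the names from this run:

# Non-deprecated equivalent of the step above on newer kubectl releases (sketch).
kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine \
  --namespace=e2e-tests-kubectl-v894q
kubectl delete job e2e-test-nginx-job --namespace=e2e-tests-kubectl-v894q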
------------------------------
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:48:18.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-3f56a357-c2a3-11ea-a406-0242ac11000f
STEP: Creating secret with name s-test-opt-upd-3f56a3d0-c2a3-11ea-a406-0242ac11000f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3f56a357-c2a3-11ea-a406-0242ac11000f
STEP: Updating secret s-test-opt-upd-3f56a3d0-c2a3-11ea-a406-0242ac11000f
STEP: Creating secret with name s-test-opt-create-3f56a3f9-c2a3-11ea-a406-0242ac11000f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:49:58.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-kdmq7" for this suite.
Jul 10 11:50:22.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:50:22.121: INFO: namespace: e2e-tests-secrets-kdmq7, resource: bindings, ignored listing per whitelist
Jul 10 11:50:22.161: INFO: namespace e2e-tests-secrets-kdmq7 deletion completed in 24.086685734s

• [SLOW TEST:123.631 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
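
The "optional" behaviour being tested can be sketched as follows: a secret volume marked optional lets the pod start even while the referenced secret is absent, and the mount is updated once the secret is created or changed. Names and image below are illustrative.

# Illustrative sketch: an optional secret volume that tolerates a missing secret.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional-demo
spec:
  containers:
  - name: creates-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/secret-volumes/create/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: creates-volume
      mountPath: /etc/secret-volumes/create
  volumes:
  - name: creates-volume
    secret:
      secretName: s-test-opt-create-demo
      optional: true
EOF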
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:50:22.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jul 10 11:50:22.646: INFO: Waiting up to 5m0s for pod "var-expansion-893e5edc-c2a3-11ea-a406-0242ac11000f" in namespace "e2e-tests-var-expansion-2vlwb" to be "success or failure"
Jul 10 11:50:22.757: INFO: Pod "var-expansion-893e5edc-c2a3-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 111.378259ms
Jul 10 11:50:25.032: INFO: Pod "var-expansion-893e5edc-c2a3-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386914698s
Jul 10 11:50:27.062: INFO: Pod "var-expansion-893e5edc-c2a3-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.416581449s
STEP: Saw pod success
Jul 10 11:50:27.062: INFO: Pod "var-expansion-893e5edc-c2a3-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:50:27.064: INFO: Trying to get logs from node hunter-worker pod var-expansion-893e5edc-c2a3-11ea-a406-0242ac11000f container dapi-container: 
STEP: delete the pod
Jul 10 11:50:27.134: INFO: Waiting for pod var-expansion-893e5edc-c2a3-11ea-a406-0242ac11000f to disappear
Jul 10 11:50:27.465: INFO: Pod var-expansion-893e5edc-c2a3-11ea-a406-0242ac11000f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:50:27.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-2vlwb" for this suite.
Jul 10 11:50:33.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:50:33.888: INFO: namespace: e2e-tests-var-expansion-2vlwb, resource: bindings, ignored listing per whitelist
Jul 10 11:50:33.938: INFO: namespace e2e-tests-var-expansion-2vlwb deletion completed in 6.415373481s

• [SLOW TEST:11.777 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
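
What "substituting values in a container's args" means, as a sketch: the kubelet expands $(VAR) references in command and args from the container's own environment before the process starts. Names and image below are illustrative.

# Illustrative sketch: $(TEST_VAR) in args is expanded before the shell runs.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c"]
    args: ["echo substituted value is $(TEST_VAR)"]
    env:
    - name: TEST_VAR
      value: test-value
EOF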
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:50:33.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:50:42.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-btnj8" for this suite.
Jul 10 11:51:24.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:51:24.521: INFO: namespace: e2e-tests-kubelet-test-btnj8, resource: bindings, ignored listing per whitelist
Jul 10 11:51:24.565: INFO: namespace e2e-tests-kubelet-test-btnj8 deletion completed in 42.258475015s

• [SLOW TEST:50.627 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
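
A sketch of the property checked here: with readOnlyRootFilesystem set, writes to the container's root filesystem are refused while mounted volumes remain writable. Names and image below are illustrative.

# Illustrative sketch: a container whose root filesystem is mounted read-only.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly
    image: busybox
    command: ["sh", "-c", "echo hello > /file_on_root || echo write to root filesystem refused"]
    securityContext:
      readOnlyRootFilesystem: true
EOF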
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:51:24.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-6hgh9
I0710 11:51:25.458867       6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-6hgh9, replica count: 1
I0710 11:51:26.509395       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 11:51:27.509633       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 11:51:28.509866       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 11:51:29.510080       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 11:51:30.510328       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 11:51:31.510557       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 11:51:32.510787       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 10 11:51:33.199: INFO: Created: latency-svc-gnskb
Jul 10 11:51:33.465: INFO: Got endpoints: latency-svc-gnskb [854.036285ms]
Jul 10 11:51:33.799: INFO: Created: latency-svc-x5w45
Jul 10 11:51:34.039: INFO: Got endpoints: latency-svc-x5w45 [574.32229ms]
Jul 10 11:51:34.052: INFO: Created: latency-svc-rndcr
Jul 10 11:51:34.363: INFO: Got endpoints: latency-svc-rndcr [897.972265ms]
Jul 10 11:51:34.837: INFO: Created: latency-svc-7nthh
Jul 10 11:51:35.003: INFO: Got endpoints: latency-svc-7nthh [1.538358105s]
Jul 10 11:51:35.059: INFO: Created: latency-svc-ct2dw
Jul 10 11:51:35.080: INFO: Got endpoints: latency-svc-ct2dw [1.614963002s]
Jul 10 11:51:35.681: INFO: Created: latency-svc-nfhp7
Jul 10 11:51:35.714: INFO: Got endpoints: latency-svc-nfhp7 [2.248953843s]
Jul 10 11:51:35.939: INFO: Created: latency-svc-rpkdq
Jul 10 11:51:35.979: INFO: Got endpoints: latency-svc-rpkdq [2.513562231s]
Jul 10 11:51:36.132: INFO: Created: latency-svc-5xswc
Jul 10 11:51:36.548: INFO: Got endpoints: latency-svc-5xswc [3.08283724s]
Jul 10 11:51:36.560: INFO: Created: latency-svc-2bctp
Jul 10 11:51:36.579: INFO: Got endpoints: latency-svc-2bctp [3.11397496s]
Jul 10 11:51:37.004: INFO: Created: latency-svc-q82q5
Jul 10 11:51:37.040: INFO: Got endpoints: latency-svc-q82q5 [3.575004955s]
Jul 10 11:51:37.101: INFO: Created: latency-svc-cbh9b
Jul 10 11:51:37.399: INFO: Got endpoints: latency-svc-cbh9b [3.934066583s]
Jul 10 11:51:37.402: INFO: Created: latency-svc-5rfpz
Jul 10 11:51:37.412: INFO: Got endpoints: latency-svc-5rfpz [3.947043s]
Jul 10 11:51:37.453: INFO: Created: latency-svc-fsf56
Jul 10 11:51:37.485: INFO: Got endpoints: latency-svc-fsf56 [4.019860657s]
Jul 10 11:51:37.578: INFO: Created: latency-svc-tldh4
Jul 10 11:51:37.581: INFO: Got endpoints: latency-svc-tldh4 [4.115777019s]
Jul 10 11:51:37.616: INFO: Created: latency-svc-k5kzj
Jul 10 11:51:37.635: INFO: Got endpoints: latency-svc-k5kzj [4.169738602s]
Jul 10 11:51:37.678: INFO: Created: latency-svc-zwdc8
Jul 10 11:51:37.818: INFO: Got endpoints: latency-svc-zwdc8 [4.352487618s]
Jul 10 11:51:37.821: INFO: Created: latency-svc-kp7vk
Jul 10 11:51:37.895: INFO: Got endpoints: latency-svc-kp7vk [3.855306611s]
Jul 10 11:51:38.067: INFO: Created: latency-svc-kb48s
Jul 10 11:51:38.115: INFO: Got endpoints: latency-svc-kb48s [3.752385897s]
Jul 10 11:51:38.364: INFO: Created: latency-svc-7zxz5
Jul 10 11:51:38.554: INFO: Got endpoints: latency-svc-7zxz5 [3.55087343s]
Jul 10 11:51:38.598: INFO: Created: latency-svc-fnvsm
Jul 10 11:51:38.859: INFO: Got endpoints: latency-svc-fnvsm [3.779067662s]
Jul 10 11:51:38.863: INFO: Created: latency-svc-jc2lw
Jul 10 11:51:39.117: INFO: Got endpoints: latency-svc-jc2lw [3.402870057s]
Jul 10 11:51:39.186: INFO: Created: latency-svc-jljt9
Jul 10 11:51:39.279: INFO: Got endpoints: latency-svc-jljt9 [3.300047073s]
Jul 10 11:51:39.283: INFO: Created: latency-svc-jd7h8
Jul 10 11:51:39.309: INFO: Got endpoints: latency-svc-jd7h8 [2.760702869s]
Jul 10 11:51:39.346: INFO: Created: latency-svc-2svmt
Jul 10 11:51:39.465: INFO: Got endpoints: latency-svc-2svmt [2.886250313s]
Jul 10 11:51:39.467: INFO: Created: latency-svc-7zfw7
Jul 10 11:51:39.495: INFO: Got endpoints: latency-svc-7zfw7 [2.45446272s]
Jul 10 11:51:39.516: INFO: Created: latency-svc-hh5xj
Jul 10 11:51:39.532: INFO: Got endpoints: latency-svc-hh5xj [2.13219992s]
Jul 10 11:51:39.553: INFO: Created: latency-svc-5xmlp
Jul 10 11:51:39.626: INFO: Got endpoints: latency-svc-5xmlp [2.21339843s]
Jul 10 11:51:39.628: INFO: Created: latency-svc-8855r
Jul 10 11:51:39.658: INFO: Got endpoints: latency-svc-8855r [2.172472886s]
Jul 10 11:51:39.677: INFO: Created: latency-svc-qkjnx
Jul 10 11:51:39.700: INFO: Got endpoints: latency-svc-qkjnx [2.118898171s]
Jul 10 11:51:39.807: INFO: Created: latency-svc-p9nln
Jul 10 11:51:39.811: INFO: Got endpoints: latency-svc-p9nln [2.175537895s]
Jul 10 11:51:39.845: INFO: Created: latency-svc-5fkdf
Jul 10 11:51:39.875: INFO: Got endpoints: latency-svc-5fkdf [2.05652658s]
Jul 10 11:51:39.901: INFO: Created: latency-svc-d5bhn
Jul 10 11:51:39.974: INFO: Got endpoints: latency-svc-d5bhn [2.078860346s]
Jul 10 11:51:40.011: INFO: Created: latency-svc-9jgdp
Jul 10 11:51:40.027: INFO: Got endpoints: latency-svc-9jgdp [1.911734835s]
Jul 10 11:51:40.058: INFO: Created: latency-svc-kjzvm
Jul 10 11:51:40.117: INFO: Got endpoints: latency-svc-kjzvm [1.562666032s]
Jul 10 11:51:40.130: INFO: Created: latency-svc-z48kb
Jul 10 11:51:40.130: INFO: Got endpoints: latency-svc-z48kb [1.270889508s]
Jul 10 11:51:40.159: INFO: Created: latency-svc-bs6qs
Jul 10 11:51:40.176: INFO: Got endpoints: latency-svc-bs6qs [1.058918959s]
Jul 10 11:51:40.195: INFO: Created: latency-svc-bgrpb
Jul 10 11:51:40.213: INFO: Got endpoints: latency-svc-bgrpb [933.592077ms]
Jul 10 11:51:40.291: INFO: Created: latency-svc-n5hg8
Jul 10 11:51:40.308: INFO: Got endpoints: latency-svc-n5hg8 [999.267313ms]
Jul 10 11:51:40.361: INFO: Created: latency-svc-nf9xh
Jul 10 11:51:40.374: INFO: Got endpoints: latency-svc-nf9xh [909.038862ms]
Jul 10 11:51:40.471: INFO: Created: latency-svc-dzpk4
Jul 10 11:51:40.475: INFO: Got endpoints: latency-svc-dzpk4 [979.905096ms]
Jul 10 11:51:40.506: INFO: Created: latency-svc-x4mw6
Jul 10 11:51:40.519: INFO: Got endpoints: latency-svc-x4mw6 [987.106436ms]
Jul 10 11:51:40.555: INFO: Created: latency-svc-l622c
Jul 10 11:51:40.650: INFO: Got endpoints: latency-svc-l622c [1.023957716s]
Jul 10 11:51:40.653: INFO: Created: latency-svc-45d6b
Jul 10 11:51:40.663: INFO: Got endpoints: latency-svc-45d6b [1.005440863s]
Jul 10 11:51:40.688: INFO: Created: latency-svc-c98hh
Jul 10 11:51:40.730: INFO: Got endpoints: latency-svc-c98hh [1.029363068s]
Jul 10 11:51:40.885: INFO: Created: latency-svc-28xc6
Jul 10 11:51:40.889: INFO: Got endpoints: latency-svc-28xc6 [1.07774359s]
Jul 10 11:51:40.933: INFO: Created: latency-svc-grm8l
Jul 10 11:51:40.946: INFO: Got endpoints: latency-svc-grm8l [1.071453463s]
Jul 10 11:51:40.970: INFO: Created: latency-svc-s4vvt
Jul 10 11:51:40.983: INFO: Got endpoints: latency-svc-s4vvt [1.009299879s]
Jul 10 11:51:41.064: INFO: Created: latency-svc-dc7jf
Jul 10 11:51:41.066: INFO: Got endpoints: latency-svc-dc7jf [1.038952161s]
Jul 10 11:51:41.144: INFO: Created: latency-svc-6crjj
Jul 10 11:51:41.163: INFO: Got endpoints: latency-svc-6crjj [1.045726912s]
Jul 10 11:51:41.577: INFO: Created: latency-svc-z7lll
Jul 10 11:51:41.938: INFO: Got endpoints: latency-svc-z7lll [1.807255967s]
Jul 10 11:51:41.975: INFO: Created: latency-svc-xswlx
Jul 10 11:51:41.994: INFO: Got endpoints: latency-svc-xswlx [1.817893874s]
Jul 10 11:51:42.105: INFO: Created: latency-svc-wd9h9
Jul 10 11:51:42.132: INFO: Got endpoints: latency-svc-wd9h9 [1.919456983s]
Jul 10 11:51:42.160: INFO: Created: latency-svc-kslfj
Jul 10 11:51:42.174: INFO: Got endpoints: latency-svc-kslfj [1.865905134s]
Jul 10 11:51:42.202: INFO: Created: latency-svc-nrv8h
Jul 10 11:51:42.284: INFO: Got endpoints: latency-svc-nrv8h [1.910081097s]
Jul 10 11:51:42.287: INFO: Created: latency-svc-87sv5
Jul 10 11:51:42.289: INFO: Got endpoints: latency-svc-87sv5 [1.814520718s]
Jul 10 11:51:42.345: INFO: Created: latency-svc-tnfx2
Jul 10 11:51:42.361: INFO: Got endpoints: latency-svc-tnfx2 [1.8421612s]
Jul 10 11:51:42.465: INFO: Created: latency-svc-mvxjx
Jul 10 11:51:42.467: INFO: Got endpoints: latency-svc-mvxjx [1.817011723s]
Jul 10 11:51:42.495: INFO: Created: latency-svc-bvkjx
Jul 10 11:51:42.512: INFO: Got endpoints: latency-svc-bvkjx [1.848099074s]
Jul 10 11:51:42.531: INFO: Created: latency-svc-hhn2t
Jul 10 11:51:42.560: INFO: Got endpoints: latency-svc-hhn2t [1.830167093s]
Jul 10 11:51:42.650: INFO: Created: latency-svc-jd5fk
Jul 10 11:51:42.654: INFO: Got endpoints: latency-svc-jd5fk [1.765015065s]
Jul 10 11:51:42.687: INFO: Created: latency-svc-57ks2
Jul 10 11:51:42.704: INFO: Got endpoints: latency-svc-57ks2 [1.75800962s]
Jul 10 11:51:42.731: INFO: Created: latency-svc-j97xj
Jul 10 11:51:42.806: INFO: Got endpoints: latency-svc-j97xj [1.822861438s]
Jul 10 11:51:42.827: INFO: Created: latency-svc-s7zq4
Jul 10 11:51:42.861: INFO: Got endpoints: latency-svc-s7zq4 [1.794589656s]
Jul 10 11:51:42.905: INFO: Created: latency-svc-9cmw2
Jul 10 11:51:42.973: INFO: Got endpoints: latency-svc-9cmw2 [1.810374307s]
Jul 10 11:51:42.976: INFO: Created: latency-svc-h7bqd
Jul 10 11:51:42.981: INFO: Got endpoints: latency-svc-h7bqd [1.042903788s]
Jul 10 11:51:43.007: INFO: Created: latency-svc-ck7sx
Jul 10 11:51:43.023: INFO: Got endpoints: latency-svc-ck7sx [1.029313437s]
Jul 10 11:51:43.067: INFO: Created: latency-svc-k6984
Jul 10 11:51:43.171: INFO: Got endpoints: latency-svc-k6984 [1.038614857s]
Jul 10 11:51:43.173: INFO: Created: latency-svc-zhqfl
Jul 10 11:51:43.180: INFO: Got endpoints: latency-svc-zhqfl [1.005302063s]
Jul 10 11:51:43.729: INFO: Created: latency-svc-7vdmg
Jul 10 11:51:44.147: INFO: Created: latency-svc-thxdr
Jul 10 11:51:44.507: INFO: Got endpoints: latency-svc-7vdmg [2.222160189s]
Jul 10 11:51:44.509: INFO: Created: latency-svc-bh4vx
Jul 10 11:51:44.535: INFO: Got endpoints: latency-svc-bh4vx [2.173695021s]
Jul 10 11:51:44.587: INFO: Created: latency-svc-z7qwk
Jul 10 11:51:44.587: INFO: Got endpoints: latency-svc-thxdr [2.297469567s]
Jul 10 11:51:44.926: INFO: Got endpoints: latency-svc-z7qwk [2.458597971s]
Jul 10 11:51:44.929: INFO: Created: latency-svc-mjlb7
Jul 10 11:51:44.937: INFO: Got endpoints: latency-svc-mjlb7 [2.42490055s]
Jul 10 11:51:44.966: INFO: Created: latency-svc-nzsw9
Jul 10 11:51:44.979: INFO: Got endpoints: latency-svc-nzsw9 [2.41878089s]
Jul 10 11:51:45.006: INFO: Created: latency-svc-2qsrl
Jul 10 11:51:45.081: INFO: Got endpoints: latency-svc-2qsrl [2.427662386s]
Jul 10 11:51:45.083: INFO: Created: latency-svc-qqgct
Jul 10 11:51:45.112: INFO: Got endpoints: latency-svc-qqgct [2.407651365s]
Jul 10 11:51:45.146: INFO: Created: latency-svc-kcddq
Jul 10 11:51:45.518: INFO: Got endpoints: latency-svc-kcddq [2.712428283s]
Jul 10 11:51:45.541: INFO: Created: latency-svc-hwr24
Jul 10 11:51:45.567: INFO: Got endpoints: latency-svc-hwr24 [2.706252876s]
Jul 10 11:51:46.035: INFO: Created: latency-svc-d64l7
Jul 10 11:51:46.038: INFO: Got endpoints: latency-svc-d64l7 [3.064338269s]
Jul 10 11:51:46.108: INFO: Created: latency-svc-hj8fr
Jul 10 11:51:46.125: INFO: Got endpoints: latency-svc-hj8fr [3.14388182s]
Jul 10 11:51:46.255: INFO: Created: latency-svc-zhwd9
Jul 10 11:51:46.258: INFO: Got endpoints: latency-svc-zhwd9 [3.234224368s]
Jul 10 11:51:46.300: INFO: Created: latency-svc-rzn84
Jul 10 11:51:46.347: INFO: Got endpoints: latency-svc-rzn84 [3.176155989s]
Jul 10 11:51:46.477: INFO: Created: latency-svc-n4cxs
Jul 10 11:51:46.479: INFO: Got endpoints: latency-svc-n4cxs [3.299542481s]
Jul 10 11:51:46.539: INFO: Created: latency-svc-pdnlx
Jul 10 11:51:46.551: INFO: Got endpoints: latency-svc-pdnlx [2.044688418s]
Jul 10 11:51:46.650: INFO: Created: latency-svc-m4hzx
Jul 10 11:51:46.653: INFO: Got endpoints: latency-svc-m4hzx [2.11810431s]
Jul 10 11:51:47.142: INFO: Created: latency-svc-5c6rd
Jul 10 11:51:47.160: INFO: Got endpoints: latency-svc-5c6rd [2.572713275s]
Jul 10 11:51:47.199: INFO: Created: latency-svc-p2mj5
Jul 10 11:51:47.223: INFO: Got endpoints: latency-svc-p2mj5 [2.297822344s]
Jul 10 11:51:47.345: INFO: Created: latency-svc-wjtn2
Jul 10 11:51:47.348: INFO: Got endpoints: latency-svc-wjtn2 [2.411606993s]
Jul 10 11:51:47.424: INFO: Created: latency-svc-pshlf
Jul 10 11:51:47.506: INFO: Got endpoints: latency-svc-pshlf [2.527334216s]
Jul 10 11:51:47.525: INFO: Created: latency-svc-hfj4l
Jul 10 11:51:47.542: INFO: Got endpoints: latency-svc-hfj4l [2.460653544s]
Jul 10 11:51:47.571: INFO: Created: latency-svc-m56bq
Jul 10 11:51:47.584: INFO: Got endpoints: latency-svc-m56bq [2.472103022s]
Jul 10 11:51:47.729: INFO: Created: latency-svc-bzv7h
Jul 10 11:51:47.733: INFO: Got endpoints: latency-svc-bzv7h [2.214915722s]
Jul 10 11:51:47.816: INFO: Created: latency-svc-5drjn
Jul 10 11:51:47.878: INFO: Got endpoints: latency-svc-5drjn [2.31051532s]
Jul 10 11:51:47.880: INFO: Created: latency-svc-rcxvf
Jul 10 11:51:47.884: INFO: Got endpoints: latency-svc-rcxvf [1.846508816s]
Jul 10 11:51:47.911: INFO: Created: latency-svc-cpwmc
Jul 10 11:51:47.927: INFO: Got endpoints: latency-svc-cpwmc [1.801945612s]
Jul 10 11:51:47.946: INFO: Created: latency-svc-t778s
Jul 10 11:51:47.971: INFO: Got endpoints: latency-svc-t778s [1.712771881s]
Jul 10 11:51:48.022: INFO: Created: latency-svc-6hxfp
Jul 10 11:51:48.036: INFO: Got endpoints: latency-svc-6hxfp [1.688614092s]
Jul 10 11:51:48.078: INFO: Created: latency-svc-prslw
Jul 10 11:51:48.090: INFO: Got endpoints: latency-svc-prslw [1.610256893s]
Jul 10 11:51:48.118: INFO: Created: latency-svc-jsnqs
Jul 10 11:51:48.183: INFO: Got endpoints: latency-svc-jsnqs [1.631263657s]
Jul 10 11:51:48.228: INFO: Created: latency-svc-4slcf
Jul 10 11:51:48.268: INFO: Got endpoints: latency-svc-4slcf [1.614933511s]
Jul 10 11:51:48.363: INFO: Created: latency-svc-k9khj
Jul 10 11:51:48.366: INFO: Got endpoints: latency-svc-k9khj [1.206364717s]
Jul 10 11:51:48.543: INFO: Created: latency-svc-4czqn
Jul 10 11:51:48.548: INFO: Got endpoints: latency-svc-4czqn [1.324636387s]
Jul 10 11:51:48.574: INFO: Created: latency-svc-jbplb
Jul 10 11:51:48.589: INFO: Got endpoints: latency-svc-jbplb [1.241197172s]
Jul 10 11:51:48.723: INFO: Created: latency-svc-fwfnf
Jul 10 11:51:48.727: INFO: Got endpoints: latency-svc-fwfnf [1.220760505s]
Jul 10 11:51:48.821: INFO: Created: latency-svc-vh4dl
Jul 10 11:51:48.902: INFO: Got endpoints: latency-svc-vh4dl [1.359389913s]
Jul 10 11:51:48.967: INFO: Created: latency-svc-7s2vx
Jul 10 11:51:49.584: INFO: Got endpoints: latency-svc-7s2vx [2.000246279s]
Jul 10 11:51:49.588: INFO: Created: latency-svc-gc5w4
Jul 10 11:51:49.616: INFO: Got endpoints: latency-svc-gc5w4 [1.882444572s]
Jul 10 11:51:49.825: INFO: Created: latency-svc-qr68h
Jul 10 11:51:49.828: INFO: Got endpoints: latency-svc-qr68h [1.950429863s]
Jul 10 11:51:50.046: INFO: Created: latency-svc-bprj5
Jul 10 11:51:50.061: INFO: Got endpoints: latency-svc-bprj5 [2.176273765s]
Jul 10 11:51:50.457: INFO: Created: latency-svc-76s95
Jul 10 11:51:50.681: INFO: Got endpoints: latency-svc-76s95 [2.754133984s]
Jul 10 11:51:50.686: INFO: Created: latency-svc-z4js5
Jul 10 11:51:50.714: INFO: Got endpoints: latency-svc-z4js5 [2.743483728s]
Jul 10 11:51:51.277: INFO: Created: latency-svc-5v8w5
Jul 10 11:51:51.344: INFO: Got endpoints: latency-svc-5v8w5 [3.307987583s]
Jul 10 11:51:51.561: INFO: Created: latency-svc-wnsts
Jul 10 11:51:51.643: INFO: Got endpoints: latency-svc-wnsts [3.553137242s]
Jul 10 11:51:51.891: INFO: Created: latency-svc-29m2d
Jul 10 11:51:51.894: INFO: Got endpoints: latency-svc-29m2d [3.710951377s]
Jul 10 11:51:51.973: INFO: Created: latency-svc-v9n7j
Jul 10 11:51:52.033: INFO: Got endpoints: latency-svc-v9n7j [3.765235805s]
Jul 10 11:51:52.036: INFO: Created: latency-svc-grnlk
Jul 10 11:51:52.051: INFO: Got endpoints: latency-svc-grnlk [3.684908743s]
Jul 10 11:51:52.096: INFO: Created: latency-svc-ckpmh
Jul 10 11:51:52.112: INFO: Got endpoints: latency-svc-ckpmh [3.563512771s]
Jul 10 11:51:52.195: INFO: Created: latency-svc-8jswz
Jul 10 11:51:52.207: INFO: Got endpoints: latency-svc-8jswz [3.617730983s]
Jul 10 11:51:52.389: INFO: Created: latency-svc-2hw42
Jul 10 11:51:52.391: INFO: Got endpoints: latency-svc-2hw42 [3.663553765s]
Jul 10 11:51:52.433: INFO: Created: latency-svc-sgt4x
Jul 10 11:51:52.442: INFO: Got endpoints: latency-svc-sgt4x [3.539979632s]
Jul 10 11:51:52.465: INFO: Created: latency-svc-jslnl
Jul 10 11:51:52.478: INFO: Got endpoints: latency-svc-jslnl [2.893426435s]
Jul 10 11:51:52.543: INFO: Created: latency-svc-hv4zl
Jul 10 11:51:52.545: INFO: Got endpoints: latency-svc-hv4zl [2.928903144s]
Jul 10 11:51:52.627: INFO: Created: latency-svc-w5bgn
Jul 10 11:51:52.728: INFO: Got endpoints: latency-svc-w5bgn [2.899496337s]
Jul 10 11:51:52.730: INFO: Created: latency-svc-m92ch
Jul 10 11:51:52.743: INFO: Got endpoints: latency-svc-m92ch [2.682362082s]
Jul 10 11:51:53.251: INFO: Created: latency-svc-9fb7g
Jul 10 11:51:53.294: INFO: Got endpoints: latency-svc-9fb7g [2.613158234s]
Jul 10 11:51:53.342: INFO: Created: latency-svc-zhgmh
Jul 10 11:51:53.416: INFO: Got endpoints: latency-svc-zhgmh [2.702229585s]
Jul 10 11:51:53.432: INFO: Created: latency-svc-b8nwr
Jul 10 11:51:53.462: INFO: Got endpoints: latency-svc-b8nwr [2.118509049s]
Jul 10 11:51:53.940: INFO: Created: latency-svc-mvlwx
Jul 10 11:51:53.966: INFO: Got endpoints: latency-svc-mvlwx [2.323058047s]
Jul 10 11:51:54.011: INFO: Created: latency-svc-l4ssr
Jul 10 11:51:54.381: INFO: Got endpoints: latency-svc-l4ssr [2.487414635s]
Jul 10 11:51:54.411: INFO: Created: latency-svc-tqgt6
Jul 10 11:51:54.842: INFO: Got endpoints: latency-svc-tqgt6 [2.808901202s]
Jul 10 11:51:54.848: INFO: Created: latency-svc-rvhdm
Jul 10 11:51:55.239: INFO: Created: latency-svc-qvpc6
Jul 10 11:51:55.323: INFO: Got endpoints: latency-svc-rvhdm [3.271503688s]
Jul 10 11:51:55.323: INFO: Created: latency-svc-54swk
Jul 10 11:51:55.519: INFO: Got endpoints: latency-svc-54swk [3.311328611s]
Jul 10 11:51:55.522: INFO: Got endpoints: latency-svc-qvpc6 [3.410198759s]
Jul 10 11:51:55.800: INFO: Created: latency-svc-22zzt
Jul 10 11:51:55.803: INFO: Got endpoints: latency-svc-22zzt [3.412727877s]
Jul 10 11:51:55.900: INFO: Created: latency-svc-72ct6
Jul 10 11:51:56.669: INFO: Got endpoints: latency-svc-72ct6 [4.226892263s]
Jul 10 11:51:56.672: INFO: Created: latency-svc-cjb4j
Jul 10 11:51:56.680: INFO: Got endpoints: latency-svc-cjb4j [4.201776251s]
Jul 10 11:51:59.497: INFO: Created: latency-svc-95645
Jul 10 11:51:59.682: INFO: Got endpoints: latency-svc-95645 [7.137051067s]
Jul 10 11:51:59.911: INFO: Created: latency-svc-dc7d6
Jul 10 11:51:59.920: INFO: Got endpoints: latency-svc-dc7d6 [7.192627993s]
Jul 10 11:51:59.967: INFO: Created: latency-svc-wjn2d
Jul 10 11:52:00.201: INFO: Got endpoints: latency-svc-wjn2d [7.45809188s]
Jul 10 11:52:00.207: INFO: Created: latency-svc-njbch
Jul 10 11:52:00.238: INFO: Got endpoints: latency-svc-njbch [6.944061088s]
Jul 10 11:52:00.387: INFO: Created: latency-svc-pfrdg
Jul 10 11:52:00.391: INFO: Got endpoints: latency-svc-pfrdg [6.974033093s]
Jul 10 11:52:00.472: INFO: Created: latency-svc-4bprw
Jul 10 11:52:00.484: INFO: Got endpoints: latency-svc-4bprw [7.021762237s]
Jul 10 11:52:00.622: INFO: Created: latency-svc-f689l
Jul 10 11:52:00.658: INFO: Got endpoints: latency-svc-f689l [6.692530364s]
Jul 10 11:52:00.659: INFO: Created: latency-svc-g9tgv
Jul 10 11:52:00.682: INFO: Got endpoints: latency-svc-g9tgv [6.300929362s]
Jul 10 11:52:00.711: INFO: Created: latency-svc-kftdk
Jul 10 11:52:00.788: INFO: Got endpoints: latency-svc-kftdk [5.945625592s]
Jul 10 11:52:00.789: INFO: Created: latency-svc-285kz
Jul 10 11:52:00.821: INFO: Got endpoints: latency-svc-285kz [5.498330664s]
Jul 10 11:52:00.869: INFO: Created: latency-svc-jbmkk
Jul 10 11:52:00.881: INFO: Got endpoints: latency-svc-jbmkk [5.36279368s]
Jul 10 11:52:00.968: INFO: Created: latency-svc-s289w
Jul 10 11:52:00.971: INFO: Got endpoints: latency-svc-s289w [5.449450442s]
Jul 10 11:52:01.056: INFO: Created: latency-svc-9vznn
Jul 10 11:52:01.117: INFO: Got endpoints: latency-svc-9vznn [5.313762356s]
Jul 10 11:52:01.119: INFO: Created: latency-svc-v4h6t
Jul 10 11:52:01.128: INFO: Got endpoints: latency-svc-v4h6t [4.459106344s]
Jul 10 11:52:01.174: INFO: Created: latency-svc-jvttr
Jul 10 11:52:01.201: INFO: Got endpoints: latency-svc-jvttr [4.52097327s]
Jul 10 11:52:01.279: INFO: Created: latency-svc-9wwg2
Jul 10 11:52:01.282: INFO: Got endpoints: latency-svc-9wwg2 [1.599641512s]
Jul 10 11:52:01.327: INFO: Created: latency-svc-v9gqh
Jul 10 11:52:01.339: INFO: Got endpoints: latency-svc-v9gqh [1.418538188s]
Jul 10 11:52:01.361: INFO: Created: latency-svc-4z9sd
Jul 10 11:52:01.375: INFO: Got endpoints: latency-svc-4z9sd [1.173961961s]
Jul 10 11:52:01.435: INFO: Created: latency-svc-95w8d
Jul 10 11:52:01.437: INFO: Got endpoints: latency-svc-95w8d [1.19883827s]
Jul 10 11:52:01.475: INFO: Created: latency-svc-nbztn
Jul 10 11:52:01.490: INFO: Got endpoints: latency-svc-nbztn [1.099372651s]
Jul 10 11:52:01.524: INFO: Created: latency-svc-b68p5
Jul 10 11:52:01.626: INFO: Got endpoints: latency-svc-b68p5 [1.142086031s]
Jul 10 11:52:01.629: INFO: Created: latency-svc-zgl4t
Jul 10 11:52:01.634: INFO: Got endpoints: latency-svc-zgl4t [975.459756ms]
Jul 10 11:52:01.704: INFO: Created: latency-svc-zfrtl
Jul 10 11:52:01.718: INFO: Got endpoints: latency-svc-zfrtl [1.036090966s]
Jul 10 11:52:01.789: INFO: Created: latency-svc-db8tp
Jul 10 11:52:01.824: INFO: Created: latency-svc-rb4hb
Jul 10 11:52:01.824: INFO: Got endpoints: latency-svc-db8tp [1.036252162s]
Jul 10 11:52:01.839: INFO: Got endpoints: latency-svc-rb4hb [1.017796138s]
Jul 10 11:52:01.872: INFO: Created: latency-svc-q7sl6
Jul 10 11:52:01.881: INFO: Got endpoints: latency-svc-q7sl6 [999.874467ms]
Jul 10 11:52:01.968: INFO: Created: latency-svc-p8vqx
Jul 10 11:52:01.973: INFO: Got endpoints: latency-svc-p8vqx [1.001172234s]
Jul 10 11:52:01.998: INFO: Created: latency-svc-8gzp6
Jul 10 11:52:02.021: INFO: Got endpoints: latency-svc-8gzp6 [903.966047ms]
Jul 10 11:52:02.142: INFO: Created: latency-svc-cq28b
Jul 10 11:52:02.145: INFO: Got endpoints: latency-svc-cq28b [1.016968468s]
Jul 10 11:52:02.365: INFO: Created: latency-svc-srqmg
Jul 10 11:52:02.368: INFO: Got endpoints: latency-svc-srqmg [1.16718746s]
Jul 10 11:52:02.444: INFO: Created: latency-svc-t9fgt
Jul 10 11:52:02.536: INFO: Got endpoints: latency-svc-t9fgt [1.254549576s]
Jul 10 11:52:02.550: INFO: Created: latency-svc-v9mbl
Jul 10 11:52:02.588: INFO: Got endpoints: latency-svc-v9mbl [1.249022318s]
Jul 10 11:52:02.630: INFO: Created: latency-svc-dcqpm
Jul 10 11:52:02.680: INFO: Got endpoints: latency-svc-dcqpm [1.30509474s]
Jul 10 11:52:02.701: INFO: Created: latency-svc-hpbdf
Jul 10 11:52:02.732: INFO: Got endpoints: latency-svc-hpbdf [1.295270712s]
Jul 10 11:52:02.773: INFO: Created: latency-svc-fbfmz
Jul 10 11:52:02.842: INFO: Got endpoints: latency-svc-fbfmz [1.351853519s]
Jul 10 11:52:02.852: INFO: Created: latency-svc-6mdlr
Jul 10 11:52:02.864: INFO: Got endpoints: latency-svc-6mdlr [1.237563179s]
Jul 10 11:52:02.901: INFO: Created: latency-svc-5cjcw
Jul 10 11:52:02.923: INFO: Got endpoints: latency-svc-5cjcw [1.288593926s]
Jul 10 11:52:03.004: INFO: Created: latency-svc-thkbd
Jul 10 11:52:03.007: INFO: Got endpoints: latency-svc-thkbd [1.28872713s]
Jul 10 11:52:03.037: INFO: Created: latency-svc-v5d7f
Jul 10 11:52:03.079: INFO: Got endpoints: latency-svc-v5d7f [1.254393634s]
Jul 10 11:52:03.100: INFO: Created: latency-svc-qk59r
Jul 10 11:52:03.207: INFO: Got endpoints: latency-svc-qk59r [1.368243654s]
Jul 10 11:52:03.223: INFO: Created: latency-svc-5qcqm
Jul 10 11:52:03.375: INFO: Got endpoints: latency-svc-5qcqm [1.493635262s]
Jul 10 11:52:03.377: INFO: Created: latency-svc-xml5c
Jul 10 11:52:03.409: INFO: Got endpoints: latency-svc-xml5c [1.436121567s]
Jul 10 11:52:03.434: INFO: Created: latency-svc-pvq8c
Jul 10 11:52:03.465: INFO: Got endpoints: latency-svc-pvq8c [1.443598519s]
Jul 10 11:52:03.531: INFO: Created: latency-svc-g8s8p
Jul 10 11:52:03.565: INFO: Got endpoints: latency-svc-g8s8p [1.420065312s]
Jul 10 11:52:03.621: INFO: Created: latency-svc-7lq89
Jul 10 11:52:03.698: INFO: Got endpoints: latency-svc-7lq89 [1.330480623s]
Jul 10 11:52:03.728: INFO: Created: latency-svc-sxwnj
Jul 10 11:52:03.746: INFO: Got endpoints: latency-svc-sxwnj [1.209806967s]
Jul 10 11:52:03.789: INFO: Created: latency-svc-zhxml
Jul 10 11:52:03.865: INFO: Got endpoints: latency-svc-zhxml [1.277406803s]
Jul 10 11:52:03.867: INFO: Created: latency-svc-6mm7n
Jul 10 11:52:03.879: INFO: Got endpoints: latency-svc-6mm7n [1.198331756s]
Jul 10 11:52:03.955: INFO: Created: latency-svc-m5q7d
Jul 10 11:52:04.045: INFO: Got endpoints: latency-svc-m5q7d [1.31274272s]
Jul 10 11:52:04.048: INFO: Created: latency-svc-bthg4
Jul 10 11:52:04.094: INFO: Got endpoints: latency-svc-bthg4 [1.251710415s]
Jul 10 11:52:04.094: INFO: Created: latency-svc-vxc8r
Jul 10 11:52:04.107: INFO: Got endpoints: latency-svc-vxc8r [1.243319653s]
Jul 10 11:52:04.138: INFO: Created: latency-svc-bmwrk
Jul 10 11:52:04.219: INFO: Got endpoints: latency-svc-bmwrk [1.296372318s]
Jul 10 11:52:04.221: INFO: Created: latency-svc-tn4vh
Jul 10 11:52:04.234: INFO: Got endpoints: latency-svc-tn4vh [1.226893879s]
Jul 10 11:52:04.261: INFO: Created: latency-svc-9jndm
Jul 10 11:52:04.275: INFO: Got endpoints: latency-svc-9jndm [1.196131726s]
Jul 10 11:52:04.300: INFO: Created: latency-svc-kx9sj
Jul 10 11:52:04.313: INFO: Got endpoints: latency-svc-kx9sj [1.105343091s]
Jul 10 11:52:04.381: INFO: Created: latency-svc-lg7q7
Jul 10 11:52:04.383: INFO: Got endpoints: latency-svc-lg7q7 [1.007998938s]
Jul 10 11:52:04.414: INFO: Created: latency-svc-hchxk
Jul 10 11:52:04.427: INFO: Got endpoints: latency-svc-hchxk [1.01818572s]
Jul 10 11:52:04.451: INFO: Created: latency-svc-pzfbr
Jul 10 11:52:04.463: INFO: Got endpoints: latency-svc-pzfbr [998.221936ms]
Jul 10 11:52:04.543: INFO: Created: latency-svc-gjlwg
Jul 10 11:52:04.545: INFO: Got endpoints: latency-svc-gjlwg [979.651594ms]
Jul 10 11:52:04.595: INFO: Created: latency-svc-plx9f
Jul 10 11:52:04.608: INFO: Got endpoints: latency-svc-plx9f [909.430037ms]
Jul 10 11:52:04.629: INFO: Created: latency-svc-59fms
Jul 10 11:52:04.692: INFO: Got endpoints: latency-svc-59fms [945.55307ms]
Jul 10 11:52:04.693: INFO: Created: latency-svc-x7gk8
Jul 10 11:52:04.698: INFO: Got endpoints: latency-svc-x7gk8 [832.44182ms]
Jul 10 11:52:04.755: INFO: Created: latency-svc-clgvg
Jul 10 11:52:04.771: INFO: Got endpoints: latency-svc-clgvg [891.992373ms]
Jul 10 11:52:04.848: INFO: Created: latency-svc-26q7h
Jul 10 11:52:04.850: INFO: Got endpoints: latency-svc-26q7h [804.929181ms]
Jul 10 11:52:04.875: INFO: Created: latency-svc-6frws
Jul 10 11:52:04.891: INFO: Got endpoints: latency-svc-6frws [797.573235ms]
Jul 10 11:52:04.891: INFO: Latencies: [574.32229ms 797.573235ms 804.929181ms 832.44182ms 891.992373ms 897.972265ms 903.966047ms 909.038862ms 909.430037ms 933.592077ms 945.55307ms 975.459756ms 979.651594ms 979.905096ms 987.106436ms 998.221936ms 999.267313ms 999.874467ms 1.001172234s 1.005302063s 1.005440863s 1.007998938s 1.009299879s 1.016968468s 1.017796138s 1.01818572s 1.023957716s 1.029313437s 1.029363068s 1.036090966s 1.036252162s 1.038614857s 1.038952161s 1.042903788s 1.045726912s 1.058918959s 1.071453463s 1.07774359s 1.099372651s 1.105343091s 1.142086031s 1.16718746s 1.173961961s 1.196131726s 1.198331756s 1.19883827s 1.206364717s 1.209806967s 1.220760505s 1.226893879s 1.237563179s 1.241197172s 1.243319653s 1.249022318s 1.251710415s 1.254393634s 1.254549576s 1.270889508s 1.277406803s 1.288593926s 1.28872713s 1.295270712s 1.296372318s 1.30509474s 1.31274272s 1.324636387s 1.330480623s 1.351853519s 1.359389913s 1.368243654s 1.418538188s 1.420065312s 1.436121567s 1.443598519s 1.493635262s 1.538358105s 1.562666032s 1.599641512s 1.610256893s 1.614933511s 1.614963002s 1.631263657s 1.688614092s 1.712771881s 1.75800962s 1.765015065s 1.794589656s 1.801945612s 1.807255967s 1.810374307s 1.814520718s 1.817011723s 1.817893874s 1.822861438s 1.830167093s 1.8421612s 1.846508816s 1.848099074s 1.865905134s 1.882444572s 1.910081097s 1.911734835s 1.919456983s 1.950429863s 2.000246279s 2.044688418s 2.05652658s 2.078860346s 2.11810431s 2.118509049s 2.118898171s 2.13219992s 2.172472886s 2.173695021s 2.175537895s 2.176273765s 2.21339843s 2.214915722s 2.222160189s 2.248953843s 2.297469567s 2.297822344s 2.31051532s 2.323058047s 2.407651365s 2.411606993s 2.41878089s 2.42490055s 2.427662386s 2.45446272s 2.458597971s 2.460653544s 2.472103022s 2.487414635s 2.513562231s 2.527334216s 2.572713275s 2.613158234s 2.682362082s 2.702229585s 2.706252876s 2.712428283s 2.743483728s 2.754133984s 2.760702869s 2.808901202s 2.886250313s 2.893426435s 2.899496337s 2.928903144s 3.064338269s 3.08283724s 3.11397496s 3.14388182s 3.176155989s 3.234224368s 3.271503688s 3.299542481s 3.300047073s 3.307987583s 3.311328611s 3.402870057s 3.410198759s 3.412727877s 3.539979632s 3.55087343s 3.553137242s 3.563512771s 3.575004955s 3.617730983s 3.663553765s 3.684908743s 3.710951377s 3.752385897s 3.765235805s 3.779067662s 3.855306611s 3.934066583s 3.947043s 4.019860657s 4.115777019s 4.169738602s 4.201776251s 4.226892263s 4.352487618s 4.459106344s 4.52097327s 5.313762356s 5.36279368s 5.449450442s 5.498330664s 5.945625592s 6.300929362s 6.692530364s 6.944061088s 6.974033093s 7.021762237s 7.137051067s 7.192627993s 7.45809188s]
Jul 10 11:52:04.892: INFO: 50 %ile: 1.910081097s
Jul 10 11:52:04.892: INFO: 90 %ile: 4.115777019s
Jul 10 11:52:04.892: INFO: 99 %ile: 7.192627993s
Jul 10 11:52:04.892: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:52:04.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-6hgh9" for this suite.
Jul 10 11:52:43.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:52:43.056: INFO: namespace: e2e-tests-svc-latency-6hgh9, resource: bindings, ignored listing per whitelist
Jul 10 11:52:43.085: INFO: namespace e2e-tests-svc-latency-6hgh9 deletion completed in 38.162540198s

• [SLOW TEST:78.520 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
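(Editor's note, not part of the captured output: the service endpoints latency test above records one "Created"-to-"Got endpoints" sample per service, 200 in total, and reports 50/90/99 percentiles. The stdlib-only sketch below shows one way to compute such percentiles from a sample set using a simple nearest-rank rule; it is an illustration, not the e2e framework's own implementation, and the sample values are a few taken from the log purely for the demo.)

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns an approximate nearest-rank p-th percentile of samples.
// Illustrative only; percentile definitions vary slightly between tools.
func percentile(samples []time.Duration, p float64) time.Duration {
	if len(samples) == 0 {
		return 0
	}
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(float64(len(sorted))*p/100.0) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	samples := []time.Duration{
		574 * time.Millisecond,
		854 * time.Millisecond,
		1538 * time.Millisecond,
		7458 * time.Millisecond,
	}
	for _, p := range []float64{50, 90, 99} {
		fmt.Printf("%v %%ile: %v\n", p, percentile(samples, p))
	}
}
```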
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:52:43.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:52:49.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-j8vrv" for this suite.
Jul 10 11:53:29.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:53:29.712: INFO: namespace: e2e-tests-kubelet-test-j8vrv, resource: bindings, ignored listing per whitelist
Jul 10 11:53:29.734: INFO: namespace e2e-tests-kubelet-test-j8vrv deletion completed in 40.115535776s

• [SLOW TEST:46.647 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
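(Editor's note, not part of the captured output: the "should print the output to logs" case above runs a busybox command whose stdout ends up in the container log, which is what `kubectl logs` or the pod GetLogs API returns. A minimal sketch of such a pod, with illustrative names, follows.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod: whatever the command writes to stdout is captured in
	// the container log and can be read back with `kubectl logs <pod>`.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo 'Hello World'"},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command)
}
```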
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:53:29.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-f8d0a234-c2a3-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume secrets
Jul 10 11:53:29.842: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f8d4cb56-c2a3-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-rn4xl" to be "success or failure"
Jul 10 11:53:29.845: INFO: Pod "pod-projected-secrets-f8d4cb56-c2a3-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.852319ms
Jul 10 11:53:31.849: INFO: Pod "pod-projected-secrets-f8d4cb56-c2a3-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007015254s
Jul 10 11:53:33.853: INFO: Pod "pod-projected-secrets-f8d4cb56-c2a3-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010323212s
STEP: Saw pod success
Jul 10 11:53:33.853: INFO: Pod "pod-projected-secrets-f8d4cb56-c2a3-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:53:33.855: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-f8d4cb56-c2a3-11ea-a406-0242ac11000f container secret-volume-test: 
STEP: delete the pod
Jul 10 11:53:33.914: INFO: Waiting for pod pod-projected-secrets-f8d4cb56-c2a3-11ea-a406-0242ac11000f to disappear
Jul 10 11:53:34.072: INFO: Pod pod-projected-secrets-f8d4cb56-c2a3-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:53:34.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rn4xl" for this suite.
Jul 10 11:53:40.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:53:40.363: INFO: namespace: e2e-tests-projected-rn4xl, resource: bindings, ignored listing per whitelist
Jul 10 11:53:40.388: INFO: namespace e2e-tests-projected-rn4xl deletion completed in 6.312738691s

• [SLOW TEST:10.654 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
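(Editor's note, not part of the captured output: the Projected secret case above mounts the same secret into one pod through more than one projected volume and reads it back from both mount points. A minimal sketch under that assumption; the volume names, secret name, mount paths and command are made up.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretVolume builds a projected volume exposing a single secret.
func projectedSecretVolume(volName, secretName string) corev1.Volume {
	return corev1.Volume{
		Name: volName,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
					},
				}},
			},
		},
	}
}

func main() {
	// The same secret mounted twice at two different paths, as in the
	// "consumable in multiple volumes in a pod" case.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				projectedSecretVolume("secret-volume-1", "projected-secret-test"),
				projectedSecretVolume("secret-volume-2", "projected-secret-test"),
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
	fmt.Println(len(pod.Spec.Volumes), "projected secret volumes")
}
```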
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:53:40.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-ff3a7e70-c2a3-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume configMaps
Jul 10 11:53:40.594: INFO: Waiting up to 5m0s for pod "pod-configmaps-ff3afed5-c2a3-11ea-a406-0242ac11000f" in namespace "e2e-tests-configmap-lksg4" to be "success or failure"
Jul 10 11:53:40.598: INFO: Pod "pod-configmaps-ff3afed5-c2a3-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.877176ms
Jul 10 11:53:42.664: INFO: Pod "pod-configmaps-ff3afed5-c2a3-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070616064s
Jul 10 11:53:44.667: INFO: Pod "pod-configmaps-ff3afed5-c2a3-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07379337s
STEP: Saw pod success
Jul 10 11:53:44.668: INFO: Pod "pod-configmaps-ff3afed5-c2a3-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:53:44.670: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-ff3afed5-c2a3-11ea-a406-0242ac11000f container configmap-volume-test: 
STEP: delete the pod
Jul 10 11:53:44.725: INFO: Waiting for pod pod-configmaps-ff3afed5-c2a3-11ea-a406-0242ac11000f to disappear
Jul 10 11:53:44.735: INFO: Pod pod-configmaps-ff3afed5-c2a3-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:53:44.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lksg4" for this suite.
Jul 10 11:53:50.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:53:50.845: INFO: namespace: e2e-tests-configmap-lksg4, resource: bindings, ignored listing per whitelist
Jul 10 11:53:50.896: INFO: namespace e2e-tests-configmap-lksg4 deletion completed in 6.158354047s

• [SLOW TEST:10.508 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
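(Editor's note, not part of the captured output: the ConfigMap case above mounts a configMap as a volume and reads it from a container running as a non-root user. A minimal sketch of such a pod; the UID, names, paths and command are illustrative assumptions, not values from the run.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1000) // any non-zero UID; illustrative value
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/*"},
				// Run the consuming container as a non-root user, as in the
				// "as non-root" variant of the test.
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRootUID},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```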
SSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:53:50.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-dz92v
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-dz92v
STEP: Deleting pre-stop pod
Jul 10 11:54:08.432: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:54:08.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-dz92v" for this suite.
Jul 10 11:54:48.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:54:48.638: INFO: namespace: e2e-tests-prestop-dz92v, resource: bindings, ignored listing per whitelist
Jul 10 11:54:48.685: INFO: namespace e2e-tests-prestop-dz92v deletion completed in 40.241540847s

• [SLOW TEST:57.788 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
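(Editor's note, not part of the captured output: the PreStop case above deletes a pod carrying a preStop lifecycle hook and checks, via the server pod's state shown in the log, that the hook fired before termination. Below is a minimal sketch of a pod with such a hook; the hook command, URL and names are made up, and in the API generation used by this run the handler type is corev1.Handler, renamed LifecycleHandler in later releases.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod: before stopping the container, the kubelet runs the
	// preStop handler; here it calls back to a server so an observer can
	// record that the hook executed. Endpoint and names are hypothetical.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler in this API generation; LifecycleHandler in newer ones.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"wget", "-qO-", "http://server:8080/prestop"},
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Lifecycle.PreStop != nil)
}
```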
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:54:48.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jul 10 11:54:49.660: INFO: Waiting up to 5m0s for pod "client-containers-2867fc41-c2a4-11ea-a406-0242ac11000f" in namespace "e2e-tests-containers-h24lk" to be "success or failure"
Jul 10 11:54:49.701: INFO: Pod "client-containers-2867fc41-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 41.531314ms
Jul 10 11:54:52.283: INFO: Pod "client-containers-2867fc41-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.622888411s
Jul 10 11:54:54.285: INFO: Pod "client-containers-2867fc41-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.625463271s
Jul 10 11:54:56.290: INFO: Pod "client-containers-2867fc41-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.629907537s
Jul 10 11:54:58.294: INFO: Pod "client-containers-2867fc41-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.634461813s
Jul 10 11:55:00.304: INFO: Pod "client-containers-2867fc41-c2a4-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.644496667s
STEP: Saw pod success
Jul 10 11:55:00.304: INFO: Pod "client-containers-2867fc41-c2a4-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:55:00.307: INFO: Trying to get logs from node hunter-worker pod client-containers-2867fc41-c2a4-11ea-a406-0242ac11000f container test-container: 
STEP: delete the pod
Jul 10 11:55:00.373: INFO: Waiting for pod client-containers-2867fc41-c2a4-11ea-a406-0242ac11000f to disappear
Jul 10 11:55:00.396: INFO: Pod client-containers-2867fc41-c2a4-11ea-a406-0242ac11000f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:55:00.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-h24lk" for this suite.
Jul 10 11:55:06.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:55:06.444: INFO: namespace: e2e-tests-containers-h24lk, resource: bindings, ignored listing per whitelist
Jul 10 11:55:06.492: INFO: namespace e2e-tests-containers-h24lk deletion completed in 6.092051841s

• [SLOW TEST:17.806 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
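(Editor's note, not part of the captured output: the Docker Containers case above overrides the image's default command, i.e. its Docker ENTRYPOINT, by setting the container's Command field. A minimal sketch with illustrative names and command follows.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod: Command replaces the image's ENTRYPOINT (Args would
	// replace its CMD). The test verifies the container ran the override.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // image whose default entrypoint is being overridden
				Command: []string{"/bin/sh", "-c", "echo override"},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command)
}
```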
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:55:06.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-327c5ad2-c2a4-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume configMaps
Jul 10 11:55:06.583: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-327df1a4-c2a4-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-4fm42" to be "success or failure"
Jul 10 11:55:06.606: INFO: Pod "pod-projected-configmaps-327df1a4-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.192832ms
Jul 10 11:55:08.617: INFO: Pod "pod-projected-configmaps-327df1a4-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03401969s
Jul 10 11:55:10.621: INFO: Pod "pod-projected-configmaps-327df1a4-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038000681s
Jul 10 11:55:12.627: INFO: Pod "pod-projected-configmaps-327df1a4-c2a4-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043888624s
STEP: Saw pod success
Jul 10 11:55:12.627: INFO: Pod "pod-projected-configmaps-327df1a4-c2a4-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:55:12.631: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-327df1a4-c2a4-11ea-a406-0242ac11000f container projected-configmap-volume-test: 
STEP: delete the pod
Jul 10 11:55:12.671: INFO: Waiting for pod pod-projected-configmaps-327df1a4-c2a4-11ea-a406-0242ac11000f to disappear
Jul 10 11:55:12.701: INFO: Pod pod-projected-configmaps-327df1a4-c2a4-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:55:12.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4fm42" for this suite.
Jul 10 11:55:18.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:55:18.765: INFO: namespace: e2e-tests-projected-4fm42, resource: bindings, ignored listing per whitelist
Jul 10 11:55:18.784: INFO: namespace e2e-tests-projected-4fm42 deletion completed in 6.079842931s

• [SLOW TEST:12.293 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
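(Editor's note, not part of the captured output: the "with mappings" Projected configMap case above selects individual configMap keys and remaps them to different file paths via items; running as non-root works the same way as in the ConfigMap sketch earlier. Below is a minimal sketch of just such a volume definition; key, path and names are illustrative.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative projected configMap volume: only the key "data-1" is
	// exposed, and it is remapped to the file path "path/to/data-2".
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
						}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}
```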
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:55:18.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0710 11:55:19.975855       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 10 11:55:19.975: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:55:19.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-shs6t" for this suite.
Jul 10 11:55:25.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:55:26.005: INFO: namespace: e2e-tests-gc-shs6t, resource: bindings, ignored listing per whitelist
Jul 10 11:55:26.058: INFO: namespace e2e-tests-gc-shs6t deletion completed in 6.079661558s

• [SLOW TEST:7.273 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
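(Editor's note, not part of the captured output: the garbage collector case above deletes a Deployment without orphaning its dependents, i.e. with a cascading propagation policy, and then polls until the owned ReplicaSet and pods are gone. The sketch below only constructs the delete options, since the exact Delete call signature varies across client-go versions; the choice of Background here is illustrative, as Foreground also cascades.)

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Background propagation: the Deployment object is removed immediately
	// and the garbage collector then deletes the ReplicaSet and pods it
	// owned. DeletePropagationOrphan would instead leave the dependents
	// behind, which is the "orphaning" case this test is not exercising.
	policy := metav1.DeletePropagationBackground
	opts := metav1.DeleteOptions{PropagationPolicy: &policy}
	fmt.Println(*opts.PropagationPolicy)
}
```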
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:55:26.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 10 11:55:26.357: INFO: Waiting up to 5m0s for pod "downward-api-3e471f4c-c2a4-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-s4gdh" to be "success or failure"
Jul 10 11:55:26.360: INFO: Pod "downward-api-3e471f4c-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.187982ms
Jul 10 11:55:28.364: INFO: Pod "downward-api-3e471f4c-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006898088s
Jul 10 11:55:30.402: INFO: Pod "downward-api-3e471f4c-c2a4-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 4.045284383s
Jul 10 11:55:32.406: INFO: Pod "downward-api-3e471f4c-c2a4-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049154063s
STEP: Saw pod success
Jul 10 11:55:32.406: INFO: Pod "downward-api-3e471f4c-c2a4-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:55:32.409: INFO: Trying to get logs from node hunter-worker pod downward-api-3e471f4c-c2a4-11ea-a406-0242ac11000f container dapi-container: 
STEP: delete the pod
Jul 10 11:55:32.444: INFO: Waiting for pod downward-api-3e471f4c-c2a4-11ea-a406-0242ac11000f to disappear
Jul 10 11:55:32.457: INFO: Pod downward-api-3e471f4c-c2a4-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:55:32.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-s4gdh" for this suite.
Jul 10 11:55:38.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:55:38.525: INFO: namespace: e2e-tests-downward-api-s4gdh, resource: bindings, ignored listing per whitelist
Jul 10 11:55:38.580: INFO: namespace e2e-tests-downward-api-s4gdh deletion completed in 6.120473495s

• [SLOW TEST:12.522 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
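(Editor's note, not part of the captured output: the Downward API case above exposes limits.cpu and limits.memory as environment variables; when the container declares no limits of its own, the values default to the node's allocatable capacity, which is what "default limits ... from node allocatable" refers to. A minimal sketch with illustrative names follows.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod: no resource limits are set on the container, so these
	// env vars resolve to the node's allocatable CPU and memory.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}
	fmt.Println(len(pod.Spec.Containers[0].Env), "downward API env vars")
}
```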
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:55:38.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jul 10 11:55:38.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tfcl9'
Jul 10 11:55:41.354: INFO: stderr: ""
Jul 10 11:55:41.355: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul 10 11:55:42.359: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 11:55:42.359: INFO: Found 0 / 1
Jul 10 11:55:43.476: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 11:55:43.476: INFO: Found 0 / 1
Jul 10 11:55:44.823: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 11:55:44.823: INFO: Found 0 / 1
Jul 10 11:55:45.359: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 11:55:45.359: INFO: Found 0 / 1
Jul 10 11:55:46.373: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 11:55:46.373: INFO: Found 0 / 1
Jul 10 11:55:47.359: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 11:55:47.359: INFO: Found 1 / 1
Jul 10 11:55:47.359: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jul 10 11:55:47.362: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 11:55:47.362: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 10 11:55:47.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-66rtd --namespace=e2e-tests-kubectl-tfcl9 -p {"metadata":{"annotations":{"x":"y"}}}'
Jul 10 11:55:47.455: INFO: stderr: ""
Jul 10 11:55:47.455: INFO: stdout: "pod/redis-master-66rtd patched\n"
STEP: checking annotations
Jul 10 11:55:47.462: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 11:55:47.462: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:55:47.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tfcl9" for this suite.
Jul 10 11:56:09.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:56:09.528: INFO: namespace: e2e-tests-kubectl-tfcl9, resource: bindings, ignored listing per whitelist
Jul 10 11:56:09.557: INFO: namespace e2e-tests-kubectl-tfcl9 deletion completed in 22.091701004s

• [SLOW TEST:30.977 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
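
Editor's note: the patch command the test runs is quoted verbatim above; the same flow from a shell is a patch followed by a read-back of the annotation. Pod name and namespace are taken from the log (that namespace is destroyed at the end of the spec, so substitute your own when reproducing):

kubectl --kubeconfig=/root/.kube/config patch pod redis-master-66rtd \
  --namespace=e2e-tests-kubectl-tfcl9 -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl --kubeconfig=/root/.kube/config get pod redis-master-66rtd \
  --namespace=e2e-tests-kubectl-tfcl9 -o jsonpath='{.metadata.annotations.x}'
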
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:56:09.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 10 11:56:09.957: INFO: Waiting up to 5m0s for pod "downwardapi-volume-582c234c-c2a4-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-hln5r" to be "success or failure"
Jul 10 11:56:09.997: INFO: Pod "downwardapi-volume-582c234c-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 39.443628ms
Jul 10 11:56:12.697: INFO: Pod "downwardapi-volume-582c234c-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.74003318s
Jul 10 11:56:14.701: INFO: Pod "downwardapi-volume-582c234c-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.743923159s
Jul 10 11:56:16.781: INFO: Pod "downwardapi-volume-582c234c-c2a4-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 6.823218404s
Jul 10 11:56:18.784: INFO: Pod "downwardapi-volume-582c234c-c2a4-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.826721085s
STEP: Saw pod success
Jul 10 11:56:18.784: INFO: Pod "downwardapi-volume-582c234c-c2a4-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:56:18.787: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-582c234c-c2a4-11ea-a406-0242ac11000f container client-container: 
STEP: delete the pod
Jul 10 11:56:18.884: INFO: Waiting for pod downwardapi-volume-582c234c-c2a4-11ea-a406-0242ac11000f to disappear
Jul 10 11:56:18.895: INFO: Pod downwardapi-volume-582c234c-c2a4-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:56:18.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hln5r" for this suite.
Jul 10 11:56:24.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:56:24.978: INFO: namespace: e2e-tests-projected-hln5r, resource: bindings, ignored listing per whitelist
Jul 10 11:56:25.042: INFO: namespace e2e-tests-projected-hln5r deletion completed in 6.144319595s

• [SLOW TEST:15.485 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
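
Editor's note: a minimal sketch of a pod that surfaces its own CPU limit through a projected downwardAPI volume, the mechanism this spec exercises; the names, image, and the 500m limit are illustrative assumptions, not the test's manifest:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-dapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                 # example limit exposed through the volume
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
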
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:56:25.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6150f885-c2a4-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume secrets
Jul 10 11:56:25.568: INFO: Waiting up to 5m0s for pod "pod-secrets-61516ea2-c2a4-11ea-a406-0242ac11000f" in namespace "e2e-tests-secrets-p4s64" to be "success or failure"
Jul 10 11:56:25.595: INFO: Pod "pod-secrets-61516ea2-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.219886ms
Jul 10 11:56:27.775: INFO: Pod "pod-secrets-61516ea2-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206616792s
Jul 10 11:56:29.979: INFO: Pod "pod-secrets-61516ea2-c2a4-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.41136335s
Jul 10 11:56:31.983: INFO: Pod "pod-secrets-61516ea2-c2a4-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.415177766s
STEP: Saw pod success
Jul 10 11:56:31.983: INFO: Pod "pod-secrets-61516ea2-c2a4-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 11:56:31.986: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-61516ea2-c2a4-11ea-a406-0242ac11000f container secret-volume-test: 
STEP: delete the pod
Jul 10 11:56:32.052: INFO: Waiting for pod pod-secrets-61516ea2-c2a4-11ea-a406-0242ac11000f to disappear
Jul 10 11:56:32.164: INFO: Pod pod-secrets-61516ea2-c2a4-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:56:32.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-p4s64" for this suite.
Jul 10 11:56:38.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:56:38.342: INFO: namespace: e2e-tests-secrets-p4s64, resource: bindings, ignored listing per whitelist
Jul 10 11:56:38.363: INFO: namespace e2e-tests-secrets-p4s64 deletion completed in 6.175011897s

• [SLOW TEST:13.320 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
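
Editor's note: the spec mounts a secret as a volume and reads it back from the container. A hand-rolled equivalent with assumed names, key, and image:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF
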
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:56:38.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 10 11:56:38.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-82v8k'
Jul 10 11:56:38.847: INFO: stderr: ""
Jul 10 11:56:38.847: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jul 10 11:56:38.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-82v8k'
Jul 10 11:56:47.605: INFO: stderr: ""
Jul 10 11:56:47.605: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:56:47.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-82v8k" for this suite.
Jul 10 11:56:55.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:56:55.673: INFO: namespace: e2e-tests-kubectl-82v8k, resource: bindings, ignored listing per whitelist
Jul 10 11:56:55.737: INFO: namespace e2e-tests-kubectl-82v8k deletion completed in 8.097126128s

• [SLOW TEST:17.373 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
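
Editor's note: the run command is quoted verbatim in the log, including the --generator=run-pod/v1 flag accepted by the kubectl release used here. The manual sequence (substitute your own namespace; the test's namespace is torn down afterwards):

kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine --namespace=<namespace>
kubectl get pod e2e-test-nginx-pod --namespace=<namespace>
kubectl delete pods e2e-test-nginx-pod --namespace=<namespace>
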
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:56:55.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jul 10 11:56:55.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jul 10 11:56:55.970: INFO: stderr: ""
Jul 10 11:56:55.970: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 11:56:55.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-w4hfv" for this suite.
Jul 10 11:57:01.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 11:57:02.024: INFO: namespace: e2e-tests-kubectl-w4hfv, resource: bindings, ignored listing per whitelist
Jul 10 11:57:02.064: INFO: namespace e2e-tests-kubectl-w4hfv deletion completed in 6.089567683s

• [SLOW TEST:6.327 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
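
Editor's note: the assertion amounts to checking that the core group's "v1" appears in the list printed above; from a shell that is a one-liner (grep -x matches the whole line, so entries like apps/v1 do not count):

kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1
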
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 11:57:02.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-rl4zm
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jul 10 11:57:02.170: INFO: Found 0 stateful pods, waiting for 3
Jul 10 11:57:12.214: INFO: Found 2 stateful pods, waiting for 3
Jul 10 11:57:22.394: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 11:57:22.394: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 11:57:22.394: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 10 11:57:33.028: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 11:57:33.028: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 11:57:33.028: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 11:57:33.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rl4zm ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 10 11:57:33.954: INFO: stderr: "I0710 11:57:33.264372    2836 log.go:172] (0xc0001224d0) (0xc00072e640) Create stream\nI0710 11:57:33.264420    2836 log.go:172] (0xc0001224d0) (0xc00072e640) Stream added, broadcasting: 1\nI0710 11:57:33.267214    2836 log.go:172] (0xc0001224d0) Reply frame received for 1\nI0710 11:57:33.267262    2836 log.go:172] (0xc0001224d0) (0xc00069adc0) Create stream\nI0710 11:57:33.267286    2836 log.go:172] (0xc0001224d0) (0xc00069adc0) Stream added, broadcasting: 3\nI0710 11:57:33.268339    2836 log.go:172] (0xc0001224d0) Reply frame received for 3\nI0710 11:57:33.268373    2836 log.go:172] (0xc0001224d0) (0xc000312000) Create stream\nI0710 11:57:33.268384    2836 log.go:172] (0xc0001224d0) (0xc000312000) Stream added, broadcasting: 5\nI0710 11:57:33.269488    2836 log.go:172] (0xc0001224d0) Reply frame received for 5\nI0710 11:57:33.949269    2836 log.go:172] (0xc0001224d0) Data frame received for 3\nI0710 11:57:33.949289    2836 log.go:172] (0xc00069adc0) (3) Data frame handling\nI0710 11:57:33.949309    2836 log.go:172] (0xc00069adc0) (3) Data frame sent\nI0710 11:57:33.949314    2836 log.go:172] (0xc0001224d0) Data frame received for 3\nI0710 11:57:33.949320    2836 log.go:172] (0xc00069adc0) (3) Data frame handling\nI0710 11:57:33.949489    2836 log.go:172] (0xc0001224d0) Data frame received for 5\nI0710 11:57:33.949509    2836 log.go:172] (0xc000312000) (5) Data frame handling\nI0710 11:57:33.950886    2836 log.go:172] (0xc0001224d0) Data frame received for 1\nI0710 11:57:33.950920    2836 log.go:172] (0xc00072e640) (1) Data frame handling\nI0710 11:57:33.950944    2836 log.go:172] (0xc00072e640) (1) Data frame sent\nI0710 11:57:33.950964    2836 log.go:172] (0xc0001224d0) (0xc00072e640) Stream removed, broadcasting: 1\nI0710 11:57:33.951015    2836 log.go:172] (0xc0001224d0) Go away received\nI0710 11:57:33.951217    2836 log.go:172] (0xc0001224d0) (0xc00072e640) Stream removed, broadcasting: 1\nI0710 11:57:33.951239    2836 log.go:172] (0xc0001224d0) (0xc00069adc0) Stream removed, broadcasting: 3\nI0710 11:57:33.951249    2836 log.go:172] (0xc0001224d0) (0xc000312000) Stream removed, broadcasting: 5\n"
Jul 10 11:57:33.954: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 10 11:57:33.954: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul 10 11:57:44.709: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jul 10 11:57:54.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rl4zm ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 10 11:57:55.102: INFO: stderr: "I0710 11:57:55.037581    2859 log.go:172] (0xc000138790) (0xc000702640) Create stream\nI0710 11:57:55.037647    2859 log.go:172] (0xc000138790) (0xc000702640) Stream added, broadcasting: 1\nI0710 11:57:55.039671    2859 log.go:172] (0xc000138790) Reply frame received for 1\nI0710 11:57:55.039703    2859 log.go:172] (0xc000138790) (0xc0007026e0) Create stream\nI0710 11:57:55.039712    2859 log.go:172] (0xc000138790) (0xc0007026e0) Stream added, broadcasting: 3\nI0710 11:57:55.040418    2859 log.go:172] (0xc000138790) Reply frame received for 3\nI0710 11:57:55.040446    2859 log.go:172] (0xc000138790) (0xc0005acc80) Create stream\nI0710 11:57:55.040456    2859 log.go:172] (0xc000138790) (0xc0005acc80) Stream added, broadcasting: 5\nI0710 11:57:55.041309    2859 log.go:172] (0xc000138790) Reply frame received for 5\nI0710 11:57:55.098382    2859 log.go:172] (0xc000138790) Data frame received for 5\nI0710 11:57:55.098423    2859 log.go:172] (0xc0005acc80) (5) Data frame handling\nI0710 11:57:55.098457    2859 log.go:172] (0xc000138790) Data frame received for 3\nI0710 11:57:55.098505    2859 log.go:172] (0xc0007026e0) (3) Data frame handling\nI0710 11:57:55.098532    2859 log.go:172] (0xc0007026e0) (3) Data frame sent\nI0710 11:57:55.098549    2859 log.go:172] (0xc000138790) Data frame received for 3\nI0710 11:57:55.098556    2859 log.go:172] (0xc0007026e0) (3) Data frame handling\nI0710 11:57:55.099913    2859 log.go:172] (0xc000138790) Data frame received for 1\nI0710 11:57:55.099973    2859 log.go:172] (0xc000702640) (1) Data frame handling\nI0710 11:57:55.099998    2859 log.go:172] (0xc000702640) (1) Data frame sent\nI0710 11:57:55.100020    2859 log.go:172] (0xc000138790) (0xc000702640) Stream removed, broadcasting: 1\nI0710 11:57:55.100095    2859 log.go:172] (0xc000138790) Go away received\nI0710 11:57:55.100283    2859 log.go:172] (0xc000138790) (0xc000702640) Stream removed, broadcasting: 1\nI0710 11:57:55.100302    2859 log.go:172] (0xc000138790) (0xc0007026e0) Stream removed, broadcasting: 3\nI0710 11:57:55.100313    2859 log.go:172] (0xc000138790) (0xc0005acc80) Stream removed, broadcasting: 5\n"
Jul 10 11:57:55.102: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 10 11:57:55.102: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 10 11:58:05.120: INFO: Waiting for StatefulSet e2e-tests-statefulset-rl4zm/ss2 to complete update
Jul 10 11:58:05.120: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 11:58:05.120: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 11:58:05.120: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 11:58:15.194: INFO: Waiting for StatefulSet e2e-tests-statefulset-rl4zm/ss2 to complete update
Jul 10 11:58:15.194: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 11:58:15.194: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 11:58:25.124: INFO: Waiting for StatefulSet e2e-tests-statefulset-rl4zm/ss2 to complete update
Jul 10 11:58:25.124: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 11:58:25.124: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 11:58:35.128: INFO: Waiting for StatefulSet e2e-tests-statefulset-rl4zm/ss2 to complete update
Jul 10 11:58:35.128: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 11:58:35.128: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 11:58:45.128: INFO: Waiting for StatefulSet e2e-tests-statefulset-rl4zm/ss2 to complete update
Jul 10 11:58:45.128: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 11:58:55.128: INFO: Waiting for StatefulSet e2e-tests-statefulset-rl4zm/ss2 to complete update
Jul 10 11:58:55.128: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Jul 10 11:59:05.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rl4zm ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 10 11:59:05.724: INFO: stderr: "I0710 11:59:05.236119    2881 log.go:172] (0xc000138840) (0xc000605400) Create stream\nI0710 11:59:05.236173    2881 log.go:172] (0xc000138840) (0xc000605400) Stream added, broadcasting: 1\nI0710 11:59:05.238671    2881 log.go:172] (0xc000138840) Reply frame received for 1\nI0710 11:59:05.238717    2881 log.go:172] (0xc000138840) (0xc0006054a0) Create stream\nI0710 11:59:05.238745    2881 log.go:172] (0xc000138840) (0xc0006054a0) Stream added, broadcasting: 3\nI0710 11:59:05.239434    2881 log.go:172] (0xc000138840) Reply frame received for 3\nI0710 11:59:05.239464    2881 log.go:172] (0xc000138840) (0xc000605540) Create stream\nI0710 11:59:05.239478    2881 log.go:172] (0xc000138840) (0xc000605540) Stream added, broadcasting: 5\nI0710 11:59:05.240201    2881 log.go:172] (0xc000138840) Reply frame received for 5\nI0710 11:59:05.719409    2881 log.go:172] (0xc000138840) Data frame received for 3\nI0710 11:59:05.719430    2881 log.go:172] (0xc0006054a0) (3) Data frame handling\nI0710 11:59:05.719444    2881 log.go:172] (0xc0006054a0) (3) Data frame sent\nI0710 11:59:05.720288    2881 log.go:172] (0xc000138840) Data frame received for 5\nI0710 11:59:05.720310    2881 log.go:172] (0xc000605540) (5) Data frame handling\nI0710 11:59:05.720326    2881 log.go:172] (0xc000138840) Data frame received for 3\nI0710 11:59:05.720351    2881 log.go:172] (0xc0006054a0) (3) Data frame handling\nI0710 11:59:05.722123    2881 log.go:172] (0xc000138840) Data frame received for 1\nI0710 11:59:05.722138    2881 log.go:172] (0xc000605400) (1) Data frame handling\nI0710 11:59:05.722147    2881 log.go:172] (0xc000605400) (1) Data frame sent\nI0710 11:59:05.722157    2881 log.go:172] (0xc000138840) (0xc000605400) Stream removed, broadcasting: 1\nI0710 11:59:05.722167    2881 log.go:172] (0xc000138840) Go away received\nI0710 11:59:05.722335    2881 log.go:172] (0xc000138840) (0xc000605400) Stream removed, broadcasting: 1\nI0710 11:59:05.722360    2881 log.go:172] (0xc000138840) (0xc0006054a0) Stream removed, broadcasting: 3\nI0710 11:59:05.722371    2881 log.go:172] (0xc000138840) (0xc000605540) Stream removed, broadcasting: 5\n"
Jul 10 11:59:05.724: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 10 11:59:05.724: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 10 11:59:15.819: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jul 10 11:59:25.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-rl4zm ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 10 11:59:26.114: INFO: stderr: "I0710 11:59:26.046181    2903 log.go:172] (0xc00015c8f0) (0xc000768640) Create stream\nI0710 11:59:26.046262    2903 log.go:172] (0xc00015c8f0) (0xc000768640) Stream added, broadcasting: 1\nI0710 11:59:26.048688    2903 log.go:172] (0xc00015c8f0) Reply frame received for 1\nI0710 11:59:26.048840    2903 log.go:172] (0xc00015c8f0) (0xc000688e60) Create stream\nI0710 11:59:26.048868    2903 log.go:172] (0xc00015c8f0) (0xc000688e60) Stream added, broadcasting: 3\nI0710 11:59:26.049866    2903 log.go:172] (0xc00015c8f0) Reply frame received for 3\nI0710 11:59:26.049905    2903 log.go:172] (0xc00015c8f0) (0xc0007686e0) Create stream\nI0710 11:59:26.049921    2903 log.go:172] (0xc00015c8f0) (0xc0007686e0) Stream added, broadcasting: 5\nI0710 11:59:26.050780    2903 log.go:172] (0xc00015c8f0) Reply frame received for 5\nI0710 11:59:26.109998    2903 log.go:172] (0xc00015c8f0) Data frame received for 5\nI0710 11:59:26.110025    2903 log.go:172] (0xc0007686e0) (5) Data frame handling\nI0710 11:59:26.110058    2903 log.go:172] (0xc00015c8f0) Data frame received for 3\nI0710 11:59:26.110080    2903 log.go:172] (0xc000688e60) (3) Data frame handling\nI0710 11:59:26.110086    2903 log.go:172] (0xc000688e60) (3) Data frame sent\nI0710 11:59:26.110093    2903 log.go:172] (0xc00015c8f0) Data frame received for 3\nI0710 11:59:26.110101    2903 log.go:172] (0xc000688e60) (3) Data frame handling\nI0710 11:59:26.111382    2903 log.go:172] (0xc00015c8f0) Data frame received for 1\nI0710 11:59:26.111406    2903 log.go:172] (0xc000768640) (1) Data frame handling\nI0710 11:59:26.111423    2903 log.go:172] (0xc000768640) (1) Data frame sent\nI0710 11:59:26.111438    2903 log.go:172] (0xc00015c8f0) (0xc000768640) Stream removed, broadcasting: 1\nI0710 11:59:26.111507    2903 log.go:172] (0xc00015c8f0) Go away received\nI0710 11:59:26.111687    2903 log.go:172] (0xc00015c8f0) (0xc000768640) Stream removed, broadcasting: 1\nI0710 11:59:26.111705    2903 log.go:172] (0xc00015c8f0) (0xc000688e60) Stream removed, broadcasting: 3\nI0710 11:59:26.111718    2903 log.go:172] (0xc00015c8f0) (0xc0007686e0) Stream removed, broadcasting: 5\n"
Jul 10 11:59:26.114: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 10 11:59:26.114: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 10 11:59:38.656: INFO: Waiting for StatefulSet e2e-tests-statefulset-rl4zm/ss2 to complete update
Jul 10 11:59:38.656: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 10 11:59:38.656: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 10 11:59:48.665: INFO: Waiting for StatefulSet e2e-tests-statefulset-rl4zm/ss2 to complete update
Jul 10 11:59:48.665: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 10 11:59:58.667: INFO: Waiting for StatefulSet e2e-tests-statefulset-rl4zm/ss2 to complete update
Jul 10 11:59:58.667: INFO: Waiting for Pod e2e-tests-statefulset-rl4zm/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 10 12:00:08.664: INFO: Waiting for StatefulSet e2e-tests-statefulset-rl4zm/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 10 12:00:19.552: INFO: Deleting all statefulset in ns e2e-tests-statefulset-rl4zm
Jul 10 12:00:19.748: INFO: Scaling statefulset ss2 to 0
Jul 10 12:00:50.000: INFO: Waiting for statefulset status.replicas updated to 0
Jul 10 12:00:50.002: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:00:50.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-rl4zm" for this suite.
Jul 10 12:01:04.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:01:04.145: INFO: namespace: e2e-tests-statefulset-rl4zm, resource: bindings, ignored listing per whitelist
Jul 10 12:01:04.177: INFO: namespace e2e-tests-statefulset-rl4zm deletion completed in 14.110558961s

• [SLOW TEST:242.113 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
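
Editor's note: one way to reproduce the update-then-roll-back flow from a shell, assuming that changing the pod template image is what triggers the revision change (the image names and the revision hashes ss2-6c5cd755cd / ss2-7c9b54fd4c are taken from the log; the namespace is a placeholder):

# Move to the new image (update revision ss2-7c9b54fd4c), wait, then revert to the
# original image (revision ss2-6c5cd755cd).
kubectl -n <namespace> patch statefulset ss2 --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/nginx:1.15-alpine"}]'
kubectl -n <namespace> rollout status statefulset ss2
kubectl -n <namespace> patch statefulset ss2 --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/nginx:1.14-alpine"}]'
kubectl -n <namespace> get pods ss2-0 ss2-1 ss2-2 \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
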
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:01:04.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0710 12:01:15.318861       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 10 12:01:15.318: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:01:15.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-t7xrg" for this suite.
Jul 10 12:01:31.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:01:31.422: INFO: namespace: e2e-tests-gc-t7xrg, resource: bindings, ignored listing per whitelist
Jul 10 12:01:31.476: INFO: namespace e2e-tests-gc-t7xrg deletion completed in 16.107801071s

• [SLOW TEST:27.298 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
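
Editor's note: the dependents that must survive are the pods carrying two ownerReferences, one of which points at simpletest-rc-to-stay; the other owner, simpletest-rc-to-be-deleted, appears to be removed with foreground propagation, which is what makes it "wait for dependents to be deleted". A rough shell equivalent, assuming kubectl proxy is serving the API on 127.0.0.1:8001 and using a placeholder namespace:

# List each pod together with the names of its owners.
kubectl -n <namespace> get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.ownerReferences[*].name}{"\n"}{end}'
# Delete one owner with foreground propagation via the REST API.
curl -X DELETE \
  "http://127.0.0.1:8001/api/v1/namespaces/<namespace>/replicationcontrollers/simpletest-rc-to-be-deleted" \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
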
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:01:31.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:02:19.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-fbh7b" for this suite.
Jul 10 12:02:27.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:02:27.561: INFO: namespace: e2e-tests-container-runtime-fbh7b, resource: bindings, ignored listing per whitelist
Jul 10 12:02:27.575: INFO: namespace e2e-tests-container-runtime-fbh7b deletion completed in 8.333457238s

• [SLOW TEST:56.099 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
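
Editor's note: each terminate-cmd-* variant runs a container whose command exits and then checks RestartCount, Phase, Ready, and State against the pod's restart policy. A minimal way to observe the same fields by hand (pod name, image, and exit code are assumptions):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: OnFailure          # vary Always / OnFailure / Never to change the expected status
  containers:
  - name: terminate-cmd
    image: busybox
    command: ["sh", "-c", "exit 1"]
EOF
kubectl get pod terminate-demo \
  -o jsonpath='{.status.phase}{" restarts="}{.status.containerStatuses[0].restartCount}{"\n"}'
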
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:02:27.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-39852e05-c2a5-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume configMaps
Jul 10 12:02:28.534: INFO: Waiting up to 5m0s for pod "pod-configmaps-39b3c5d6-c2a5-11ea-a406-0242ac11000f" in namespace "e2e-tests-configmap-mtm5n" to be "success or failure"
Jul 10 12:02:28.623: INFO: Pod "pod-configmaps-39b3c5d6-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 88.386149ms
Jul 10 12:02:30.769: INFO: Pod "pod-configmaps-39b3c5d6-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234450274s
Jul 10 12:02:32.773: INFO: Pod "pod-configmaps-39b3c5d6-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238770881s
Jul 10 12:02:35.134: INFO: Pod "pod-configmaps-39b3c5d6-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.599667345s
Jul 10 12:02:37.138: INFO: Pod "pod-configmaps-39b3c5d6-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.603440031s
Jul 10 12:02:39.141: INFO: Pod "pod-configmaps-39b3c5d6-c2a5-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.606743138s
STEP: Saw pod success
Jul 10 12:02:39.141: INFO: Pod "pod-configmaps-39b3c5d6-c2a5-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:02:39.146: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-39b3c5d6-c2a5-11ea-a406-0242ac11000f container configmap-volume-test: 
STEP: delete the pod
Jul 10 12:02:39.601: INFO: Waiting for pod pod-configmaps-39b3c5d6-c2a5-11ea-a406-0242ac11000f to disappear
Jul 10 12:02:39.643: INFO: Pod pod-configmaps-39b3c5d6-c2a5-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:02:39.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mtm5n" for this suite.
Jul 10 12:02:46.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:02:46.328: INFO: namespace: e2e-tests-configmap-mtm5n, resource: bindings, ignored listing per whitelist
Jul 10 12:02:46.354: INFO: namespace e2e-tests-configmap-mtm5n deletion completed in 6.707814776s

• [SLOW TEST:18.779 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
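
Editor's note: a minimal sketch of a configMap volume with defaultMode set, which is the feature under test; the map name, key, mode value, and image are illustrative:

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-config
      defaultMode: 0400             # mode applied to every projected key; 0400 is an example
EOF
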
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:02:46.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 10 12:02:46.733: INFO: Waiting up to 5m0s for pod "pod-44bc28b9-c2a5-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-kfn6x" to be "success or failure"
Jul 10 12:02:46.773: INFO: Pod "pod-44bc28b9-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 39.837298ms
Jul 10 12:02:49.284: INFO: Pod "pod-44bc28b9-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.550616624s
Jul 10 12:02:51.289: INFO: Pod "pod-44bc28b9-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.555739248s
Jul 10 12:02:53.292: INFO: Pod "pod-44bc28b9-c2a5-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.558760814s
STEP: Saw pod success
Jul 10 12:02:53.292: INFO: Pod "pod-44bc28b9-c2a5-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:02:53.294: INFO: Trying to get logs from node hunter-worker pod pod-44bc28b9-c2a5-11ea-a406-0242ac11000f container test-container: 
STEP: delete the pod
Jul 10 12:02:53.490: INFO: Waiting for pod pod-44bc28b9-c2a5-11ea-a406-0242ac11000f to disappear
Jul 10 12:02:53.506: INFO: Pod pod-44bc28b9-c2a5-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:02:53.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kfn6x" for this suite.
Jul 10 12:02:59.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:02:59.666: INFO: namespace: e2e-tests-emptydir-kfn6x, resource: bindings, ignored listing per whitelist
Jul 10 12:02:59.673: INFO: namespace e2e-tests-emptydir-kfn6x deletion completed in 6.164248266s

• [SLOW TEST:13.319 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
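
Editor's note: the (root,0644,default) triple names the user the container runs as, the file mode being checked, and the emptyDir medium. A hand-rolled equivalent with assumed names and image:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-default
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/volume/file && chmod 0644 /mnt/volume/file && ls -l /mnt/volume/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}                    # default medium: backed by the node's disk
EOF
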
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:02:59.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 10 12:02:59.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-c4wsv'
Jul 10 12:02:59.887: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 10 12:02:59.887: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jul 10 12:03:01.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-c4wsv'
Jul 10 12:03:02.064: INFO: stderr: ""
Jul 10 12:03:02.065: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:03:02.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-c4wsv" for this suite.
Jul 10 12:03:24.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:03:24.211: INFO: namespace: e2e-tests-kubectl-c4wsv, resource: bindings, ignored listing per whitelist
Jul 10 12:03:24.232: INFO: namespace e2e-tests-kubectl-c4wsv deletion completed in 22.162730042s

• [SLOW TEST:24.558 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
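
Editor's note: with the kubectl release used in this run, kubectl run without --restart goes through the deprecated deployment generator, which is why stderr above warns about --generator=deployment/apps.v1 and why the cleanup deletes a deployment. The manual equivalent (the run=<name> label is what that generator applies to the pods; namespace is a placeholder):

kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine \
  --namespace=<namespace>
kubectl get deployment e2e-test-nginx-deployment --namespace=<namespace>
kubectl get pods -l run=e2e-test-nginx-deployment --namespace=<namespace>
kubectl delete deployment e2e-test-nginx-deployment --namespace=<namespace>
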
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:03:24.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jul 10 12:03:24.335: INFO: Waiting up to 5m0s for pod "client-containers-5b2d0fd5-c2a5-11ea-a406-0242ac11000f" in namespace "e2e-tests-containers-tpmlj" to be "success or failure"
Jul 10 12:03:24.352: INFO: Pod "client-containers-5b2d0fd5-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.318121ms
Jul 10 12:03:26.356: INFO: Pod "client-containers-5b2d0fd5-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020926053s
Jul 10 12:03:28.360: INFO: Pod "client-containers-5b2d0fd5-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024711238s
Jul 10 12:03:30.364: INFO: Pod "client-containers-5b2d0fd5-c2a5-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028680996s
STEP: Saw pod success
Jul 10 12:03:30.364: INFO: Pod "client-containers-5b2d0fd5-c2a5-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:03:30.367: INFO: Trying to get logs from node hunter-worker pod client-containers-5b2d0fd5-c2a5-11ea-a406-0242ac11000f container test-container: 
STEP: delete the pod
Jul 10 12:03:30.482: INFO: Waiting for pod client-containers-5b2d0fd5-c2a5-11ea-a406-0242ac11000f to disappear
Jul 10 12:03:30.537: INFO: Pod client-containers-5b2d0fd5-c2a5-11ea-a406-0242ac11000f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:03:30.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-tpmlj" for this suite.
Jul 10 12:03:36.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:03:36.702: INFO: namespace: e2e-tests-containers-tpmlj, resource: bindings, ignored listing per whitelist
Jul 10 12:03:36.706: INFO: namespace e2e-tests-containers-tpmlj deletion completed in 6.166086375s

• [SLOW TEST:12.474 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
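
Editor's note: in the pod spec, command overrides the image's ENTRYPOINT and args overrides its CMD, which is the behaviour this spec verifies. A minimal sketch with assumed names and image:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]          # replaces the image ENTRYPOINT
    args: ["override", "arguments"] # replaces the image CMD
EOF
kubectl logs client-containers-demo
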
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:03:36.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 10 12:03:36.838: INFO: Waiting up to 5m0s for pod "pod-629f62db-c2a5-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-2m5kr" to be "success or failure"
Jul 10 12:03:36.870: INFO: Pod "pod-629f62db-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 31.633946ms
Jul 10 12:03:38.874: INFO: Pod "pod-629f62db-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035795573s
Jul 10 12:03:40.878: INFO: Pod "pod-629f62db-c2a5-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039446794s
STEP: Saw pod success
Jul 10 12:03:40.878: INFO: Pod "pod-629f62db-c2a5-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:03:40.881: INFO: Trying to get logs from node hunter-worker2 pod pod-629f62db-c2a5-11ea-a406-0242ac11000f container test-container: 
STEP: delete the pod
Jul 10 12:03:40.901: INFO: Waiting for pod pod-629f62db-c2a5-11ea-a406-0242ac11000f to disappear
Jul 10 12:03:40.905: INFO: Pod pod-629f62db-c2a5-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:03:40.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2m5kr" for this suite.
Jul 10 12:03:46.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:03:46.935: INFO: namespace: e2e-tests-emptydir-2m5kr, resource: bindings, ignored listing per whitelist
Jul 10 12:03:47.025: INFO: namespace e2e-tests-emptydir-2m5kr deletion completed in 6.115805554s

• [SLOW TEST:10.318 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
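
Editor's note: same pattern as the default-medium emptyDir case above, but running as a non-root UID against a tmpfs-backed volume; the UID, names, and image are assumptions:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                 # any non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/volume/file && chmod 0644 /mnt/volume/file && ls -l /mnt/volume/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                # tmpfs-backed emptyDir
EOF
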
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:03:47.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jul 10 12:03:51.199: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-68c6e3e7-c2a5-11ea-a406-0242ac11000f", GenerateName:"", Namespace:"e2e-tests-pods-rt2rm", SelfLink:"/api/v1/namespaces/e2e-tests-pods-rt2rm/pods/pod-submit-remove-68c6e3e7-c2a5-11ea-a406-0242ac11000f", UID:"68c9404f-c2a5-11ea-b2c9-0242ac120008", ResourceVersion:"20698", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729979427, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"147883625"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-zlrpz", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002593840), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zlrpz", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00161a018), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0025caba0), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00161a060)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00161a080)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00161a088), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00161a08c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729979427, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729979430, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729979430, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729979427, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.2.141", StartTime:(*v1.Time)(0xc002412ac0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002412ae0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://5303764a3dcaa154ce5d21e56a8b8c245c29a8887f7047dfd2b736d70cf09f83"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:03:57.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rt2rm" for this suite.
Jul 10 12:04:03.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:04:03.644: INFO: namespace: e2e-tests-pods-rt2rm, resource: bindings, ignored listing per whitelist
Jul 10 12:04:03.677: INFO: namespace e2e-tests-pods-rt2rm deletion completed in 6.107551425s

• [SLOW TEST:16.653 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:04:03.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 10 12:04:10.527: INFO: Successfully updated pod "annotationupdate72bab576-c2a5-11ea-a406-0242ac11000f"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:04:12.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5n2d8" for this suite.
Jul 10 12:04:34.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:04:34.745: INFO: namespace: e2e-tests-downward-api-5n2d8, resource: bindings, ignored listing per whitelist
Jul 10 12:04:34.872: INFO: namespace e2e-tests-downward-api-5n2d8 deletion completed in 22.193275904s

• [SLOW TEST:31.195 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
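Note: the "should update annotations on modification" test above creates a pod whose downward API volume projects metadata.annotations into a file, patches the annotations, and waits for the kubelet to rewrite the file. A minimal hand-written sketch of such a pod (the name, annotation values, and busybox image are illustrative, not taken from this run):

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo        # illustrative name
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox
    # print the projected annotations file in a loop so updates are observable
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations

Changing the annotation (for example with kubectl annotate pod annotationupdate-demo build=two --overwrite) should eventually show up in /etc/podinfo/annotations, which is what the test asserts.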
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:04:34.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jul 10 12:04:43.044: INFO: Pod pod-hostip-854f183e-c2a5-11ea-a406-0242ac11000f has hostIP: 172.18.0.4
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:04:43.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-hnc49" for this suite.
Jul 10 12:05:05.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:05:05.420: INFO: namespace: e2e-tests-pods-hnc49, resource: bindings, ignored listing per whitelist
Jul 10 12:05:05.459: INFO: namespace e2e-tests-pods-hnc49 deletion completed in 22.411587378s

• [SLOW TEST:30.586 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
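Note: the host IP check above only needs the pod's status.hostIP field to be populated once the pod is scheduled. One way to reproduce that by hand is to expose the field through the downward API (sketch only; the name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip-demo
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo \"host IP is $HOST_IP\"; sleep 3600"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP

The same value is also readable directly, e.g. kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}'.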
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:05:05.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 10 12:05:05.536: INFO: Waiting up to 5m0s for pod "downwardapi-volume-977d7ea6-c2a5-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-hlz96" to be "success or failure"
Jul 10 12:05:05.548: INFO: Pod "downwardapi-volume-977d7ea6-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.97686ms
Jul 10 12:05:07.552: INFO: Pod "downwardapi-volume-977d7ea6-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015542586s
Jul 10 12:05:09.556: INFO: Pod "downwardapi-volume-977d7ea6-c2a5-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019529063s
STEP: Saw pod success
Jul 10 12:05:09.556: INFO: Pod "downwardapi-volume-977d7ea6-c2a5-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:05:09.558: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-977d7ea6-c2a5-11ea-a406-0242ac11000f container client-container: 
STEP: delete the pod
Jul 10 12:05:09.609: INFO: Waiting for pod downwardapi-volume-977d7ea6-c2a5-11ea-a406-0242ac11000f to disappear
Jul 10 12:05:09.620: INFO: Pod downwardapi-volume-977d7ea6-c2a5-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:05:09.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hlz96" for this suite.
Jul 10 12:05:15.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:05:15.733: INFO: namespace: e2e-tests-downward-api-hlz96, resource: bindings, ignored listing per whitelist
Jul 10 12:05:15.761: INFO: namespace e2e-tests-downward-api-hlz96 deletion completed in 6.138437014s

• [SLOW TEST:10.302 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
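Note: the cpu-request variant of the downward API volume test projects a container's requests.cpu through a resourceFieldRef. A rough equivalent manifest (name, image, and resource values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m

With divisor: 1m the projected file should contain 250, the request expressed in millicores.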
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:05:15.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-vddm
STEP: Creating a pod to test atomic-volume-subpath
Jul 10 12:05:16.323: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-vddm" in namespace "e2e-tests-subpath-7c86r" to be "success or failure"
Jul 10 12:05:16.495: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Pending", Reason="", readiness=false. Elapsed: 171.868581ms
Jul 10 12:05:18.498: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174943734s
Jul 10 12:05:20.680: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356694476s
Jul 10 12:05:22.685: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.361260479s
Jul 10 12:05:24.688: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Running", Reason="", readiness=false. Elapsed: 8.364903406s
Jul 10 12:05:26.692: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Running", Reason="", readiness=false. Elapsed: 10.36917247s
Jul 10 12:05:28.696: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Running", Reason="", readiness=false. Elapsed: 12.372632997s
Jul 10 12:05:30.700: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Running", Reason="", readiness=false. Elapsed: 14.376499051s
Jul 10 12:05:32.703: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Running", Reason="", readiness=false. Elapsed: 16.379587758s
Jul 10 12:05:34.707: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Running", Reason="", readiness=false. Elapsed: 18.383914287s
Jul 10 12:05:36.712: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Running", Reason="", readiness=false. Elapsed: 20.388228242s
Jul 10 12:05:38.716: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Running", Reason="", readiness=false. Elapsed: 22.392628064s
Jul 10 12:05:40.720: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Running", Reason="", readiness=false. Elapsed: 24.396540671s
Jul 10 12:05:42.730: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Running", Reason="", readiness=false. Elapsed: 26.406559389s
Jul 10 12:05:44.734: INFO: Pod "pod-subpath-test-secret-vddm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.411184044s
STEP: Saw pod success
Jul 10 12:05:44.735: INFO: Pod "pod-subpath-test-secret-vddm" satisfied condition "success or failure"
Jul 10 12:05:44.738: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-secret-vddm container test-container-subpath-secret-vddm: 
STEP: delete the pod
Jul 10 12:05:44.772: INFO: Waiting for pod pod-subpath-test-secret-vddm to disappear
Jul 10 12:05:44.777: INFO: Pod pod-subpath-test-secret-vddm no longer exists
STEP: Deleting pod pod-subpath-test-secret-vddm
Jul 10 12:05:44.777: INFO: Deleting pod "pod-subpath-test-secret-vddm" in namespace "e2e-tests-subpath-7c86r"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:05:44.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-7c86r" for this suite.
Jul 10 12:05:50.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:05:50.901: INFO: namespace: e2e-tests-subpath-7c86r, resource: bindings, ignored listing per whitelist
Jul 10 12:05:50.920: INFO: namespace e2e-tests-subpath-7c86r deletion completed in 6.136481021s

• [SLOW TEST:35.159 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
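Note: the atomic-writer subpath test mounts a single key of a secret at a file path via subPath. A minimal sketch of the same idea (secret name, key, and image are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: subpath-demo-secret
stringData:
  config.txt: "mounted via subPath"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /etc/app/config.txt"]
    volumeMounts:
    - name: config
      mountPath: /etc/app/config.txt   # mount a single file...
      subPath: config.txt              # ...taken from this key of the volume
  volumes:
  - name: config
    secret:
      secretName: subpath-demo-secret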
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:05:50.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-b29bb899-c2a5-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume configMaps
Jul 10 12:05:51.049: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b29e8d02-c2a5-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-zmbl7" to be "success or failure"
Jul 10 12:05:51.117: INFO: Pod "pod-projected-configmaps-b29e8d02-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 68.693831ms
Jul 10 12:05:53.363: INFO: Pod "pod-projected-configmaps-b29e8d02-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314521943s
Jul 10 12:05:55.367: INFO: Pod "pod-projected-configmaps-b29e8d02-c2a5-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.318201687s
STEP: Saw pod success
Jul 10 12:05:55.367: INFO: Pod "pod-projected-configmaps-b29e8d02-c2a5-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:05:55.370: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-b29e8d02-c2a5-11ea-a406-0242ac11000f container projected-configmap-volume-test: 
STEP: delete the pod
Jul 10 12:05:55.504: INFO: Waiting for pod pod-projected-configmaps-b29e8d02-c2a5-11ea-a406-0242ac11000f to disappear
Jul 10 12:05:55.521: INFO: Pod pod-projected-configmaps-b29e8d02-c2a5-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:05:55.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zmbl7" for this suite.
Jul 10 12:06:01.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:06:01.800: INFO: namespace: e2e-tests-projected-zmbl7, resource: bindings, ignored listing per whitelist
Jul 10 12:06:01.810: INFO: namespace e2e-tests-projected-zmbl7 deletion completed in 6.285140329s

• [SLOW TEST:10.890 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
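Note: "mappings and Item mode set" means the projected configMap source remaps a key to a different path and sets a per-item file mode. Sketch (names, key, path, and mode are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected/path/to && cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
          items:
          - key: data-1
            path: path/to/data-1   # remapped path (the "mapping")
            mode: 0400             # per-item file mode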
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:06:01.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-b937cf31-c2a5-11ea-a406-0242ac11000f
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-b937cf31-c2a5-11ea-a406-0242ac11000f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:06:08.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-h2k5w" for this suite.
Jul 10 12:06:30.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:06:30.444: INFO: namespace: e2e-tests-configmap-h2k5w, resource: bindings, ignored listing per whitelist
Jul 10 12:06:30.472: INFO: namespace e2e-tests-configmap-h2k5w deletion completed in 22.143053338s

• [SLOW TEST:28.662 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
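Note: configMap volumes (without subPath) are updated in place by the kubelet, which is what "updates should be reflected in volume" exercises. A hand-written sketch of the setup (names and image are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-watch-demo
spec:
  containers:
  - name: watcher
    image: busybox
    # keep printing the key so the update becomes visible in the logs
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: configmap-test-upd-demo

Editing the ConfigMap (kubectl edit configmap configmap-test-upd-demo) should change the mounted file after the kubelet's next sync, typically within a minute.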
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:06:30.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul 10 12:06:30.565: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 10 12:06:30.591: INFO: Waiting for terminating namespaces to be deleted...
Jul 10 12:06:30.594: INFO: 
Logging pods the kubelet thinks is on node hunter-worker before test
Jul 10 12:06:30.599: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Jul 10 12:06:30.599: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 10 12:06:30.599: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Jul 10 12:06:30.599: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 10 12:06:30.599: INFO: 
Logging pods the kubelet thinks is on node hunter-worker2 before test
Jul 10 12:06:30.604: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Jul 10 12:06:30.604: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 10 12:06:30.604: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Jul 10 12:06:30.604: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162062b35eeccebc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:06:32.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-6dc5v" for this suite.
Jul 10 12:06:39.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:06:39.682: INFO: namespace: e2e-tests-sched-pred-6dc5v, resource: bindings, ignored listing per whitelist
Jul 10 12:06:39.762: INFO: namespace e2e-tests-sched-pred-6dc5v deletion completed in 6.169989379s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:9.289 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
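Note: the scheduler-predicates test provokes exactly the FailedScheduling event logged above by giving a pod a nodeSelector no node can satisfy. Sketch (the label value and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    kubernetes.io/hostname: no-such-node   # deliberately matches no node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1

The pod stays Pending, and kubectl describe pod restricted-pod-demo shows a FailedScheduling event of the form "0/N nodes are available: N node(s) didn't match node selector."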
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:06:39.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-cfb536ef-c2a5-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume secrets
Jul 10 12:06:40.219: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cfeee780-c2a5-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-zk4wh" to be "success or failure"
Jul 10 12:06:40.271: INFO: Pod "pod-projected-secrets-cfeee780-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 51.717832ms
Jul 10 12:06:42.274: INFO: Pod "pod-projected-secrets-cfeee780-c2a5-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054802045s
Jul 10 12:06:44.280: INFO: Pod "pod-projected-secrets-cfeee780-c2a5-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 4.060547682s
Jul 10 12:06:46.304: INFO: Pod "pod-projected-secrets-cfeee780-c2a5-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.085271385s
STEP: Saw pod success
Jul 10 12:06:46.305: INFO: Pod "pod-projected-secrets-cfeee780-c2a5-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:06:46.307: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-cfeee780-c2a5-11ea-a406-0242ac11000f container projected-secret-volume-test: 
STEP: delete the pod
Jul 10 12:06:46.324: INFO: Waiting for pod pod-projected-secrets-cfeee780-c2a5-11ea-a406-0242ac11000f to disappear
Jul 10 12:06:46.347: INFO: Pod pod-projected-secrets-cfeee780-c2a5-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:06:46.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zk4wh" for this suite.
Jul 10 12:06:52.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:06:52.408: INFO: namespace: e2e-tests-projected-zk4wh, resource: bindings, ignored listing per whitelist
Jul 10 12:06:52.505: INFO: namespace e2e-tests-projected-zk4wh deletion completed in 6.155099874s

• [SLOW TEST:12.743 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
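Note: the projected-secret test is the secret counterpart of the projected configMap case, consuming the secret through a projected volume rather than a plain secret volume. Sketch (names, key, and image are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: projected-secret-demo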
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:06:52.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:07:01.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-qft57" for this suite.
Jul 10 12:07:09.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:07:09.329: INFO: namespace: e2e-tests-kubelet-test-qft57, resource: bindings, ignored listing per whitelist
Jul 10 12:07:09.329: INFO: namespace e2e-tests-kubelet-test-qft57 deletion completed in 8.113638993s

• [SLOW TEST:16.823 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
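Note: the kubelet test above runs a command that always fails and then checks that the container reports a terminated state with a reason. A minimal reproduction (name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]   # exits non-zero immediately

Once the container has exited, kubectl get pod bin-false-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}' should print Error, with a non-zero exitCode alongside it.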
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:07:09.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul 10 12:07:09.464: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:09.467: INFO: Number of nodes with available pods: 0
Jul 10 12:07:09.467: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:10.473: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:10.476: INFO: Number of nodes with available pods: 0
Jul 10 12:07:10.476: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:11.472: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:11.475: INFO: Number of nodes with available pods: 0
Jul 10 12:07:11.475: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:12.472: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:12.475: INFO: Number of nodes with available pods: 0
Jul 10 12:07:12.475: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:13.784: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:13.789: INFO: Number of nodes with available pods: 1
Jul 10 12:07:13.789: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:14.530: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:14.541: INFO: Number of nodes with available pods: 2
Jul 10 12:07:14.541: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jul 10 12:07:14.611: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:14.614: INFO: Number of nodes with available pods: 1
Jul 10 12:07:14.614: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:15.618: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:15.622: INFO: Number of nodes with available pods: 1
Jul 10 12:07:15.622: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:16.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:16.622: INFO: Number of nodes with available pods: 1
Jul 10 12:07:16.622: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:17.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:17.622: INFO: Number of nodes with available pods: 1
Jul 10 12:07:17.622: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:18.620: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:18.623: INFO: Number of nodes with available pods: 1
Jul 10 12:07:18.623: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:19.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:19.623: INFO: Number of nodes with available pods: 1
Jul 10 12:07:19.623: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:20.618: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:20.621: INFO: Number of nodes with available pods: 1
Jul 10 12:07:20.621: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:21.620: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:21.624: INFO: Number of nodes with available pods: 1
Jul 10 12:07:21.624: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:22.760: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:22.996: INFO: Number of nodes with available pods: 1
Jul 10 12:07:22.996: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:23.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:23.622: INFO: Number of nodes with available pods: 1
Jul 10 12:07:23.622: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:24.664: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:24.668: INFO: Number of nodes with available pods: 1
Jul 10 12:07:24.668: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:25.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:25.622: INFO: Number of nodes with available pods: 1
Jul 10 12:07:25.623: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:26.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:26.623: INFO: Number of nodes with available pods: 1
Jul 10 12:07:26.623: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:27.637: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:27.641: INFO: Number of nodes with available pods: 1
Jul 10 12:07:27.641: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:28.621: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:28.623: INFO: Number of nodes with available pods: 1
Jul 10 12:07:28.624: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:29.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:29.623: INFO: Number of nodes with available pods: 1
Jul 10 12:07:29.623: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:07:30.619: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:07:30.622: INFO: Number of nodes with available pods: 2
Jul 10 12:07:30.622: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-5xf86, will wait for the garbage collector to delete the pods
Jul 10 12:07:30.685: INFO: Deleting DaemonSet.extensions daemon-set took: 5.843534ms
Jul 10 12:07:30.785: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.276104ms
Jul 10 12:07:38.811: INFO: Number of nodes with available pods: 0
Jul 10 12:07:38.811: INFO: Number of running nodes: 0, number of available pods: 0
Jul 10 12:07:38.814: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-5xf86/daemonsets","resourceVersion":"21431"},"items":null}

Jul 10 12:07:38.816: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-5xf86/pods","resourceVersion":"21431"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:07:38.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-5xf86" for this suite.
Jul 10 12:07:44.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:07:44.918: INFO: namespace: e2e-tests-daemonsets-5xf86, resource: bindings, ignored listing per whitelist
Jul 10 12:07:44.951: INFO: namespace e2e-tests-daemonsets-5xf86 deletion completed in 6.119165268s

• [SLOW TEST:35.622 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
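Note: a "simple daemon" in the DaemonSet test is essentially one container per schedulable node; with no toleration for the control-plane taint it lands only on the two workers, matching the counts logged above. Sketch (name and labels are illustrative; the image mirrors the nginx image used elsewhere in this run):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set-demo
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set-demo
  template:
    metadata:
      labels:
        daemonset-name: daemon-set-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine

Deleting one of its pods makes the DaemonSet controller recreate it, which is the "stop a daemon pod, check that the daemon pod is revived" step.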
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:07:44.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jul 10 12:07:45.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:07:51.933: INFO: stderr: ""
Jul 10 12:07:51.933: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 10 12:07:51.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:07:52.068: INFO: stderr: ""
Jul 10 12:07:52.068: INFO: stdout: "update-demo-nautilus-2l4z2 update-demo-nautilus-qvpwv "
Jul 10 12:07:52.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l4z2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:07:52.464: INFO: stderr: ""
Jul 10 12:07:52.464: INFO: stdout: ""
Jul 10 12:07:52.464: INFO: update-demo-nautilus-2l4z2 is created but not running
Jul 10 12:07:57.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:07:57.566: INFO: stderr: ""
Jul 10 12:07:57.566: INFO: stdout: "update-demo-nautilus-2l4z2 update-demo-nautilus-qvpwv "
Jul 10 12:07:57.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l4z2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:07:57.678: INFO: stderr: ""
Jul 10 12:07:57.678: INFO: stdout: "true"
Jul 10 12:07:57.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l4z2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:07:57.914: INFO: stderr: ""
Jul 10 12:07:57.914: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 10 12:07:57.914: INFO: validating pod update-demo-nautilus-2l4z2
Jul 10 12:07:58.174: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 10 12:07:58.174: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 10 12:07:58.174: INFO: update-demo-nautilus-2l4z2 is verified up and running
Jul 10 12:07:58.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qvpwv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:07:58.266: INFO: stderr: ""
Jul 10 12:07:58.266: INFO: stdout: "true"
Jul 10 12:07:58.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qvpwv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:07:58.362: INFO: stderr: ""
Jul 10 12:07:58.362: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 10 12:07:58.362: INFO: validating pod update-demo-nautilus-qvpwv
Jul 10 12:07:58.365: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 10 12:07:58.365: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 10 12:07:58.365: INFO: update-demo-nautilus-qvpwv is verified up and running
STEP: rolling-update to new replication controller
Jul 10 12:07:58.367: INFO: scanned /root for discovery docs: 
Jul 10 12:07:58.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:08:35.449: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul 10 12:08:35.449: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 10 12:08:35.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:08:35.695: INFO: stderr: ""
Jul 10 12:08:35.695: INFO: stdout: "update-demo-kitten-hlz5h update-demo-kitten-wpx6x "
Jul 10 12:08:35.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hlz5h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:08:35.786: INFO: stderr: ""
Jul 10 12:08:35.786: INFO: stdout: "true"
Jul 10 12:08:35.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hlz5h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:08:35.883: INFO: stderr: ""
Jul 10 12:08:35.883: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 10 12:08:35.883: INFO: validating pod update-demo-kitten-hlz5h
Jul 10 12:08:35.888: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 10 12:08:35.888: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 10 12:08:35.888: INFO: update-demo-kitten-hlz5h is verified up and running
Jul 10 12:08:35.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wpx6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:08:35.990: INFO: stderr: ""
Jul 10 12:08:35.990: INFO: stdout: "true"
Jul 10 12:08:35.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wpx6x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-znt7p'
Jul 10 12:08:36.088: INFO: stderr: ""
Jul 10 12:08:36.088: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 10 12:08:36.088: INFO: validating pod update-demo-kitten-wpx6x
Jul 10 12:08:36.092: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 10 12:08:36.092: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 10 12:08:36.092: INFO: update-demo-kitten-wpx6x is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:08:36.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-znt7p" for this suite.
Jul 10 12:09:00.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:09:00.246: INFO: namespace: e2e-tests-kubectl-znt7p, resource: bindings, ignored listing per whitelist
Jul 10 12:09:00.295: INFO: namespace e2e-tests-kubectl-znt7p deletion completed in 24.199602192s

• [SLOW TEST:75.344 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
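Note: the Update Demo test drives the long-deprecated kubectl rolling-update against a ReplicationController, swapping "nautilus" pods for "kitten" pods one at a time. A rough sketch of the initial controller (replica count, selector labels, and port are illustrative; the image names come from the log above):

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
    version: nautilus
  template:
    metadata:
      labels:
        name: update-demo
        version: nautilus
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80

The rolling update itself feeds a second controller definition (same labels, kitten image) to kubectl rolling-update update-demo-nautilus --update-period=1s -f -, as the command line in the log shows; on current clusters the equivalent flow uses a Deployment and kubectl rollout.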
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:09:00.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-j8jxw
Jul 10 12:09:06.474: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-j8jxw
STEP: checking the pod's current state and verifying that restartCount is present
Jul 10 12:09:06.477: INFO: Initial restart count of pod liveness-http is 0
Jul 10 12:09:30.717: INFO: Restart count of pod e2e-tests-container-probe-j8jxw/liveness-http is now 1 (24.240503226s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:09:30.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-j8jxw" for this suite.
Jul 10 12:09:38.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:09:39.010: INFO: namespace: e2e-tests-container-probe-j8jxw, resource: bindings, ignored listing per whitelist
Jul 10 12:09:39.020: INFO: namespace e2e-tests-container-probe-j8jxw deletion completed in 8.19193865s

• [SLOW TEST:38.725 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
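Note: the liveness-http pod used here serves /healthz and starts failing it after a short time, so the kubelet kills and restarts the container and restartCount goes from 0 to 1. A sketch along the lines of the upstream liveness example (image and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness    # docs image that starts returning 500 on /healthz
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 3

kubectl get pod liveness-http-demo then shows RESTARTS climbing, and kubectl describe pod records the liveness probe failures and container kills as events.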
SSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:09:39.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-x6q79
Jul 10 12:09:45.359: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-x6q79
STEP: checking the pod's current state and verifying that restartCount is present
Jul 10 12:09:45.363: INFO: Initial restart count of pod liveness-http is 0
Jul 10 12:09:57.638: INFO: Restart count of pod e2e-tests-container-probe-x6q79/liveness-http is now 1 (12.275653433s elapsed)
Jul 10 12:10:16.694: INFO: Restart count of pod e2e-tests-container-probe-x6q79/liveness-http is now 2 (31.330847249s elapsed)
Jul 10 12:10:37.851: INFO: Restart count of pod e2e-tests-container-probe-x6q79/liveness-http is now 3 (52.488268052s elapsed)
Jul 10 12:10:56.134: INFO: Restart count of pod e2e-tests-container-probe-x6q79/liveness-http is now 4 (1m10.771638969s elapsed)
Jul 10 12:12:01.549: INFO: Restart count of pod e2e-tests-container-probe-x6q79/liveness-http is now 5 (2m16.185744036s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:12:03.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-x6q79" for this suite.
Jul 10 12:12:13.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:12:14.053: INFO: namespace: e2e-tests-container-probe-x6q79, resource: bindings, ignored listing per whitelist
Jul 10 12:12:14.064: INFO: namespace e2e-tests-container-probe-x6q79 deletion completed in 10.42435411s

• [SLOW TEST:155.044 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:12:14.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jul 10 12:12:16.917: INFO: Waiting up to 5m0s for pod "var-expansion-982dce4f-c2a6-11ea-a406-0242ac11000f" in namespace "e2e-tests-var-expansion-bk8dg" to be "success or failure"
Jul 10 12:12:17.375: INFO: Pod "var-expansion-982dce4f-c2a6-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 457.96786ms
Jul 10 12:12:19.734: INFO: Pod "var-expansion-982dce4f-c2a6-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.817160141s
Jul 10 12:12:21.737: INFO: Pod "var-expansion-982dce4f-c2a6-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.820647535s
Jul 10 12:12:23.794: INFO: Pod "var-expansion-982dce4f-c2a6-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.877248729s
Jul 10 12:12:25.890: INFO: Pod "var-expansion-982dce4f-c2a6-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 8.973201533s
Jul 10 12:12:28.219: INFO: Pod "var-expansion-982dce4f-c2a6-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.302083804s
STEP: Saw pod success
Jul 10 12:12:28.219: INFO: Pod "var-expansion-982dce4f-c2a6-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:12:28.222: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-982dce4f-c2a6-11ea-a406-0242ac11000f container dapi-container: 
STEP: delete the pod
Jul 10 12:12:28.701: INFO: Waiting for pod var-expansion-982dce4f-c2a6-11ea-a406-0242ac11000f to disappear
Jul 10 12:12:28.714: INFO: Pod var-expansion-982dce4f-c2a6-11ea-a406-0242ac11000f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:12:28.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-bk8dg" for this suite.
Jul 10 12:12:34.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:12:34.790: INFO: namespace: e2e-tests-var-expansion-bk8dg, resource: bindings, ignored listing per whitelist
Jul 10 12:12:34.807: INFO: namespace e2e-tests-var-expansion-bk8dg deletion completed in 6.088989437s

• [SLOW TEST:20.743 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
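
The pod above exercises command substitution: the kubelet expands $(NAME) references in a container's command and args from the container's environment before starting it. A simplified, runnable Go sketch of that expansion rule follows; escaping via $$ and the exact lookup rules are omitted, and the MESSAGE variable is illustrative rather than taken from the test.

package main

import (
	"fmt"
	"regexp"
)

// ref matches $(NAME) references, the form expanded in container commands.
var ref = regexp.MustCompile(`\$\(([A-Za-z_][A-Za-z0-9_]*)\)`)

// expand replaces $(NAME) with the value from env, leaving unknown references
// untouched, mimicking (in simplified form) the kubelet's expansion.
func expand(s string, env map[string]string) string {
	return ref.ReplaceAllStringFunc(s, func(m string) string {
		name := ref.FindStringSubmatch(m)[1]
		if v, ok := env[name]; ok {
			return v
		}
		return m
	})
}

func main() {
	env := map[string]string{"MESSAGE": "test-value"}
	cmd := []string{"sh", "-c", "echo $(MESSAGE)"}
	for i, a := range cmd {
		cmd[i] = expand(a, env)
	}
	fmt.Println(cmd) // [sh -c echo test-value]
}
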
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:12:34.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul 10 12:13:00.050: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-d5nw4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:13:00.050: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:13:00.087363       6 log.go:172] (0xc000a68fd0) (0xc001ca3540) Create stream
I0710 12:13:00.087412       6 log.go:172] (0xc000a68fd0) (0xc001ca3540) Stream added, broadcasting: 1
I0710 12:13:00.090156       6 log.go:172] (0xc000a68fd0) Reply frame received for 1
I0710 12:13:00.090211       6 log.go:172] (0xc000a68fd0) (0xc000596140) Create stream
I0710 12:13:00.090236       6 log.go:172] (0xc000a68fd0) (0xc000596140) Stream added, broadcasting: 3
I0710 12:13:00.091187       6 log.go:172] (0xc000a68fd0) Reply frame received for 3
I0710 12:13:00.091308       6 log.go:172] (0xc000a68fd0) (0xc000a86f00) Create stream
I0710 12:13:00.091348       6 log.go:172] (0xc000a68fd0) (0xc000a86f00) Stream added, broadcasting: 5
I0710 12:13:00.092219       6 log.go:172] (0xc000a68fd0) Reply frame received for 5
I0710 12:13:00.150814       6 log.go:172] (0xc000a68fd0) Data frame received for 5
I0710 12:13:00.150853       6 log.go:172] (0xc000a86f00) (5) Data frame handling
I0710 12:13:00.150882       6 log.go:172] (0xc000a68fd0) Data frame received for 3
I0710 12:13:00.150897       6 log.go:172] (0xc000596140) (3) Data frame handling
I0710 12:13:00.150912       6 log.go:172] (0xc000596140) (3) Data frame sent
I0710 12:13:00.150924       6 log.go:172] (0xc000a68fd0) Data frame received for 3
I0710 12:13:00.150936       6 log.go:172] (0xc000596140) (3) Data frame handling
I0710 12:13:00.152304       6 log.go:172] (0xc000a68fd0) Data frame received for 1
I0710 12:13:00.152329       6 log.go:172] (0xc001ca3540) (1) Data frame handling
I0710 12:13:00.152343       6 log.go:172] (0xc001ca3540) (1) Data frame sent
I0710 12:13:00.152377       6 log.go:172] (0xc000a68fd0) (0xc001ca3540) Stream removed, broadcasting: 1
I0710 12:13:00.152413       6 log.go:172] (0xc000a68fd0) Go away received
I0710 12:13:00.152497       6 log.go:172] (0xc000a68fd0) (0xc001ca3540) Stream removed, broadcasting: 1
I0710 12:13:00.152523       6 log.go:172] (0xc000a68fd0) (0xc000596140) Stream removed, broadcasting: 3
I0710 12:13:00.152531       6 log.go:172] (0xc000a68fd0) (0xc000a86f00) Stream removed, broadcasting: 5
Jul 10 12:13:00.152: INFO: Exec stderr: ""
Jul 10 12:13:00.152: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-d5nw4 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:13:00.152: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:13:00.185571       6 log.go:172] (0xc001430370) (0xc001de1400) Create stream
I0710 12:13:00.185597       6 log.go:172] (0xc001430370) (0xc001de1400) Stream added, broadcasting: 1
I0710 12:13:00.190017       6 log.go:172] (0xc001430370) Reply frame received for 1
I0710 12:13:00.190155       6 log.go:172] (0xc001430370) (0xc000596320) Create stream
I0710 12:13:00.190186       6 log.go:172] (0xc001430370) (0xc000596320) Stream added, broadcasting: 3
I0710 12:13:00.192512       6 log.go:172] (0xc001430370) Reply frame received for 3
I0710 12:13:00.192565       6 log.go:172] (0xc001430370) (0xc0020363c0) Create stream
I0710 12:13:00.192587       6 log.go:172] (0xc001430370) (0xc0020363c0) Stream added, broadcasting: 5
I0710 12:13:00.193762       6 log.go:172] (0xc001430370) Reply frame received for 5
I0710 12:13:00.240298       6 log.go:172] (0xc001430370) Data frame received for 5
I0710 12:13:00.240334       6 log.go:172] (0xc0020363c0) (5) Data frame handling
I0710 12:13:00.240371       6 log.go:172] (0xc001430370) Data frame received for 3
I0710 12:13:00.240392       6 log.go:172] (0xc000596320) (3) Data frame handling
I0710 12:13:00.240425       6 log.go:172] (0xc000596320) (3) Data frame sent
I0710 12:13:00.240452       6 log.go:172] (0xc001430370) Data frame received for 3
I0710 12:13:00.240468       6 log.go:172] (0xc000596320) (3) Data frame handling
I0710 12:13:00.242289       6 log.go:172] (0xc001430370) Data frame received for 1
I0710 12:13:00.242323       6 log.go:172] (0xc001de1400) (1) Data frame handling
I0710 12:13:00.242349       6 log.go:172] (0xc001de1400) (1) Data frame sent
I0710 12:13:00.242375       6 log.go:172] (0xc001430370) (0xc001de1400) Stream removed, broadcasting: 1
I0710 12:13:00.242497       6 log.go:172] (0xc001430370) (0xc001de1400) Stream removed, broadcasting: 1
I0710 12:13:00.242535       6 log.go:172] (0xc001430370) (0xc000596320) Stream removed, broadcasting: 3
I0710 12:13:00.242562       6 log.go:172] (0xc001430370) (0xc0020363c0) Stream removed, broadcasting: 5
Jul 10 12:13:00.242: INFO: Exec stderr: ""
Jul 10 12:13:00.242: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-d5nw4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
I0710 12:13:00.242669       6 log.go:172] (0xc001430370) Go away received
Jul 10 12:13:00.242: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:13:00.278952       6 log.go:172] (0xc001430840) (0xc001de1680) Create stream
I0710 12:13:00.278985       6 log.go:172] (0xc001430840) (0xc001de1680) Stream added, broadcasting: 1
I0710 12:13:00.281105       6 log.go:172] (0xc001430840) Reply frame received for 1
I0710 12:13:00.281153       6 log.go:172] (0xc001430840) (0xc000596460) Create stream
I0710 12:13:00.281165       6 log.go:172] (0xc001430840) (0xc000596460) Stream added, broadcasting: 3
I0710 12:13:00.282251       6 log.go:172] (0xc001430840) Reply frame received for 3
I0710 12:13:00.282292       6 log.go:172] (0xc001430840) (0xc000596780) Create stream
I0710 12:13:00.282308       6 log.go:172] (0xc001430840) (0xc000596780) Stream added, broadcasting: 5
I0710 12:13:00.283315       6 log.go:172] (0xc001430840) Reply frame received for 5
I0710 12:13:00.351510       6 log.go:172] (0xc001430840) Data frame received for 5
I0710 12:13:00.351558       6 log.go:172] (0xc000596780) (5) Data frame handling
I0710 12:13:00.351610       6 log.go:172] (0xc001430840) Data frame received for 3
I0710 12:13:00.351631       6 log.go:172] (0xc000596460) (3) Data frame handling
I0710 12:13:00.351650       6 log.go:172] (0xc000596460) (3) Data frame sent
I0710 12:13:00.351669       6 log.go:172] (0xc001430840) Data frame received for 3
I0710 12:13:00.351686       6 log.go:172] (0xc000596460) (3) Data frame handling
I0710 12:13:00.353206       6 log.go:172] (0xc001430840) Data frame received for 1
I0710 12:13:00.353229       6 log.go:172] (0xc001de1680) (1) Data frame handling
I0710 12:13:00.353245       6 log.go:172] (0xc001de1680) (1) Data frame sent
I0710 12:13:00.353356       6 log.go:172] (0xc001430840) (0xc001de1680) Stream removed, broadcasting: 1
I0710 12:13:00.353472       6 log.go:172] (0xc001430840) (0xc001de1680) Stream removed, broadcasting: 1
I0710 12:13:00.353488       6 log.go:172] (0xc001430840) (0xc000596460) Stream removed, broadcasting: 3
I0710 12:13:00.353555       6 log.go:172] (0xc001430840) Go away received
I0710 12:13:00.353796       6 log.go:172] (0xc001430840) (0xc000596780) Stream removed, broadcasting: 5
Jul 10 12:13:00.353: INFO: Exec stderr: ""
Jul 10 12:13:00.353: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-d5nw4 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:13:00.353: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:13:00.387857       6 log.go:172] (0xc000a694a0) (0xc001ca37c0) Create stream
I0710 12:13:00.387880       6 log.go:172] (0xc000a694a0) (0xc001ca37c0) Stream added, broadcasting: 1
I0710 12:13:00.390342       6 log.go:172] (0xc000a694a0) Reply frame received for 1
I0710 12:13:00.390388       6 log.go:172] (0xc000a694a0) (0xc000a86fa0) Create stream
I0710 12:13:00.390405       6 log.go:172] (0xc000a694a0) (0xc000a86fa0) Stream added, broadcasting: 3
I0710 12:13:00.391416       6 log.go:172] (0xc000a694a0) Reply frame received for 3
I0710 12:13:00.391462       6 log.go:172] (0xc000a694a0) (0xc000596820) Create stream
I0710 12:13:00.391477       6 log.go:172] (0xc000a694a0) (0xc000596820) Stream added, broadcasting: 5
I0710 12:13:00.392382       6 log.go:172] (0xc000a694a0) Reply frame received for 5
I0710 12:13:00.455647       6 log.go:172] (0xc000a694a0) Data frame received for 5
I0710 12:13:00.455684       6 log.go:172] (0xc000596820) (5) Data frame handling
I0710 12:13:00.455711       6 log.go:172] (0xc000a694a0) Data frame received for 3
I0710 12:13:00.455725       6 log.go:172] (0xc000a86fa0) (3) Data frame handling
I0710 12:13:00.455740       6 log.go:172] (0xc000a86fa0) (3) Data frame sent
I0710 12:13:00.455754       6 log.go:172] (0xc000a694a0) Data frame received for 3
I0710 12:13:00.455766       6 log.go:172] (0xc000a86fa0) (3) Data frame handling
I0710 12:13:00.456914       6 log.go:172] (0xc000a694a0) Data frame received for 1
I0710 12:13:00.456928       6 log.go:172] (0xc001ca37c0) (1) Data frame handling
I0710 12:13:00.456945       6 log.go:172] (0xc001ca37c0) (1) Data frame sent
I0710 12:13:00.456960       6 log.go:172] (0xc000a694a0) (0xc001ca37c0) Stream removed, broadcasting: 1
I0710 12:13:00.457042       6 log.go:172] (0xc000a694a0) Go away received
I0710 12:13:00.457105       6 log.go:172] (0xc000a694a0) (0xc001ca37c0) Stream removed, broadcasting: 1
I0710 12:13:00.457148       6 log.go:172] (0xc000a694a0) (0xc000a86fa0) Stream removed, broadcasting: 3
I0710 12:13:00.457163       6 log.go:172] (0xc000a694a0) (0xc000596820) Stream removed, broadcasting: 5
Jul 10 12:13:00.457: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jul 10 12:13:00.457: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-d5nw4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:13:00.457: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:13:00.489988       6 log.go:172] (0xc0026ae2c0) (0xc000596b40) Create stream
I0710 12:13:00.490018       6 log.go:172] (0xc0026ae2c0) (0xc000596b40) Stream added, broadcasting: 1
I0710 12:13:00.492302       6 log.go:172] (0xc0026ae2c0) Reply frame received for 1
I0710 12:13:00.492332       6 log.go:172] (0xc0026ae2c0) (0xc001ca3860) Create stream
I0710 12:13:00.492341       6 log.go:172] (0xc0026ae2c0) (0xc001ca3860) Stream added, broadcasting: 3
I0710 12:13:00.493095       6 log.go:172] (0xc0026ae2c0) Reply frame received for 3
I0710 12:13:00.493127       6 log.go:172] (0xc0026ae2c0) (0xc001de1720) Create stream
I0710 12:13:00.493138       6 log.go:172] (0xc0026ae2c0) (0xc001de1720) Stream added, broadcasting: 5
I0710 12:13:00.493870       6 log.go:172] (0xc0026ae2c0) Reply frame received for 5
I0710 12:13:00.563349       6 log.go:172] (0xc0026ae2c0) Data frame received for 5
I0710 12:13:00.563405       6 log.go:172] (0xc001de1720) (5) Data frame handling
I0710 12:13:00.563446       6 log.go:172] (0xc0026ae2c0) Data frame received for 3
I0710 12:13:00.563461       6 log.go:172] (0xc001ca3860) (3) Data frame handling
I0710 12:13:00.563491       6 log.go:172] (0xc001ca3860) (3) Data frame sent
I0710 12:13:00.563504       6 log.go:172] (0xc0026ae2c0) Data frame received for 3
I0710 12:13:00.563515       6 log.go:172] (0xc001ca3860) (3) Data frame handling
I0710 12:13:00.565318       6 log.go:172] (0xc0026ae2c0) Data frame received for 1
I0710 12:13:00.565360       6 log.go:172] (0xc000596b40) (1) Data frame handling
I0710 12:13:00.565383       6 log.go:172] (0xc000596b40) (1) Data frame sent
I0710 12:13:00.565397       6 log.go:172] (0xc0026ae2c0) (0xc000596b40) Stream removed, broadcasting: 1
I0710 12:13:00.565419       6 log.go:172] (0xc0026ae2c0) Go away received
I0710 12:13:00.565613       6 log.go:172] (0xc0026ae2c0) (0xc000596b40) Stream removed, broadcasting: 1
I0710 12:13:00.565645       6 log.go:172] (0xc0026ae2c0) (0xc001ca3860) Stream removed, broadcasting: 3
I0710 12:13:00.565662       6 log.go:172] (0xc0026ae2c0) (0xc001de1720) Stream removed, broadcasting: 5
Jul 10 12:13:00.565: INFO: Exec stderr: ""
Jul 10 12:13:00.565: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-d5nw4 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:13:00.565: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:13:00.598758       6 log.go:172] (0xc001b3e2c0) (0xc0020366e0) Create stream
I0710 12:13:00.598785       6 log.go:172] (0xc001b3e2c0) (0xc0020366e0) Stream added, broadcasting: 1
I0710 12:13:00.601423       6 log.go:172] (0xc001b3e2c0) Reply frame received for 1
I0710 12:13:00.601467       6 log.go:172] (0xc001b3e2c0) (0xc000a87180) Create stream
I0710 12:13:00.601484       6 log.go:172] (0xc001b3e2c0) (0xc000a87180) Stream added, broadcasting: 3
I0710 12:13:00.602378       6 log.go:172] (0xc001b3e2c0) Reply frame received for 3
I0710 12:13:00.602434       6 log.go:172] (0xc001b3e2c0) (0xc001ca3900) Create stream
I0710 12:13:00.602449       6 log.go:172] (0xc001b3e2c0) (0xc001ca3900) Stream added, broadcasting: 5
I0710 12:13:00.603234       6 log.go:172] (0xc001b3e2c0) Reply frame received for 5
I0710 12:13:00.658824       6 log.go:172] (0xc001b3e2c0) Data frame received for 5
I0710 12:13:00.658868       6 log.go:172] (0xc001ca3900) (5) Data frame handling
I0710 12:13:00.658950       6 log.go:172] (0xc001b3e2c0) Data frame received for 3
I0710 12:13:00.659012       6 log.go:172] (0xc000a87180) (3) Data frame handling
I0710 12:13:00.659057       6 log.go:172] (0xc000a87180) (3) Data frame sent
I0710 12:13:00.659086       6 log.go:172] (0xc001b3e2c0) Data frame received for 3
I0710 12:13:00.659105       6 log.go:172] (0xc000a87180) (3) Data frame handling
I0710 12:13:00.660689       6 log.go:172] (0xc001b3e2c0) Data frame received for 1
I0710 12:13:00.660708       6 log.go:172] (0xc0020366e0) (1) Data frame handling
I0710 12:13:00.660719       6 log.go:172] (0xc0020366e0) (1) Data frame sent
I0710 12:13:00.660808       6 log.go:172] (0xc001b3e2c0) (0xc0020366e0) Stream removed, broadcasting: 1
I0710 12:13:00.660881       6 log.go:172] (0xc001b3e2c0) Go away received
I0710 12:13:00.660935       6 log.go:172] (0xc001b3e2c0) (0xc0020366e0) Stream removed, broadcasting: 1
I0710 12:13:00.660972       6 log.go:172] (0xc001b3e2c0) (0xc000a87180) Stream removed, broadcasting: 3
I0710 12:13:00.661014       6 log.go:172] (0xc001b3e2c0) (0xc001ca3900) Stream removed, broadcasting: 5
Jul 10 12:13:00.661: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jul 10 12:13:00.661: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-d5nw4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:13:00.661: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:13:00.689227       6 log.go:172] (0xc000a69970) (0xc001ca3ae0) Create stream
I0710 12:13:00.689252       6 log.go:172] (0xc000a69970) (0xc001ca3ae0) Stream added, broadcasting: 1
I0710 12:13:00.692695       6 log.go:172] (0xc000a69970) Reply frame received for 1
I0710 12:13:00.692814       6 log.go:172] (0xc000a69970) (0xc002036820) Create stream
I0710 12:13:00.692830       6 log.go:172] (0xc000a69970) (0xc002036820) Stream added, broadcasting: 3
I0710 12:13:00.694022       6 log.go:172] (0xc000a69970) Reply frame received for 3
I0710 12:13:00.694078       6 log.go:172] (0xc000a69970) (0xc0020368c0) Create stream
I0710 12:13:00.694101       6 log.go:172] (0xc000a69970) (0xc0020368c0) Stream added, broadcasting: 5
I0710 12:13:00.694922       6 log.go:172] (0xc000a69970) Reply frame received for 5
I0710 12:13:00.754520       6 log.go:172] (0xc000a69970) Data frame received for 3
I0710 12:13:00.754574       6 log.go:172] (0xc002036820) (3) Data frame handling
I0710 12:13:00.754597       6 log.go:172] (0xc002036820) (3) Data frame sent
I0710 12:13:00.754613       6 log.go:172] (0xc000a69970) Data frame received for 3
I0710 12:13:00.754656       6 log.go:172] (0xc002036820) (3) Data frame handling
I0710 12:13:00.754677       6 log.go:172] (0xc000a69970) Data frame received for 5
I0710 12:13:00.754693       6 log.go:172] (0xc0020368c0) (5) Data frame handling
I0710 12:13:00.756247       6 log.go:172] (0xc000a69970) Data frame received for 1
I0710 12:13:00.756295       6 log.go:172] (0xc001ca3ae0) (1) Data frame handling
I0710 12:13:00.756334       6 log.go:172] (0xc001ca3ae0) (1) Data frame sent
I0710 12:13:00.756399       6 log.go:172] (0xc000a69970) (0xc001ca3ae0) Stream removed, broadcasting: 1
I0710 12:13:00.756437       6 log.go:172] (0xc000a69970) Go away received
I0710 12:13:00.756517       6 log.go:172] (0xc000a69970) (0xc001ca3ae0) Stream removed, broadcasting: 1
I0710 12:13:00.756538       6 log.go:172] (0xc000a69970) (0xc002036820) Stream removed, broadcasting: 3
I0710 12:13:00.756549       6 log.go:172] (0xc000a69970) (0xc0020368c0) Stream removed, broadcasting: 5
Jul 10 12:13:00.756: INFO: Exec stderr: ""
Jul 10 12:13:00.756: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-d5nw4 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:13:00.756: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:13:00.815847       6 log.go:172] (0xc0026ae790) (0xc000596f00) Create stream
I0710 12:13:00.815891       6 log.go:172] (0xc0026ae790) (0xc000596f00) Stream added, broadcasting: 1
I0710 12:13:00.818931       6 log.go:172] (0xc0026ae790) Reply frame received for 1
I0710 12:13:00.818984       6 log.go:172] (0xc0026ae790) (0xc000a87220) Create stream
I0710 12:13:00.819000       6 log.go:172] (0xc0026ae790) (0xc000a87220) Stream added, broadcasting: 3
I0710 12:13:00.820004       6 log.go:172] (0xc0026ae790) Reply frame received for 3
I0710 12:13:00.820060       6 log.go:172] (0xc0026ae790) (0xc001ca3b80) Create stream
I0710 12:13:00.820096       6 log.go:172] (0xc0026ae790) (0xc001ca3b80) Stream added, broadcasting: 5
I0710 12:13:00.821170       6 log.go:172] (0xc0026ae790) Reply frame received for 5
I0710 12:13:00.885494       6 log.go:172] (0xc0026ae790) Data frame received for 3
I0710 12:13:00.885529       6 log.go:172] (0xc000a87220) (3) Data frame handling
I0710 12:13:00.885547       6 log.go:172] (0xc000a87220) (3) Data frame sent
I0710 12:13:00.885560       6 log.go:172] (0xc0026ae790) Data frame received for 3
I0710 12:13:00.885570       6 log.go:172] (0xc000a87220) (3) Data frame handling
I0710 12:13:00.886089       6 log.go:172] (0xc0026ae790) Data frame received for 5
I0710 12:13:00.886130       6 log.go:172] (0xc001ca3b80) (5) Data frame handling
I0710 12:13:00.887270       6 log.go:172] (0xc0026ae790) Data frame received for 1
I0710 12:13:00.887292       6 log.go:172] (0xc000596f00) (1) Data frame handling
I0710 12:13:00.887311       6 log.go:172] (0xc000596f00) (1) Data frame sent
I0710 12:13:00.887334       6 log.go:172] (0xc0026ae790) (0xc000596f00) Stream removed, broadcasting: 1
I0710 12:13:00.887424       6 log.go:172] (0xc0026ae790) (0xc000596f00) Stream removed, broadcasting: 1
I0710 12:13:00.887440       6 log.go:172] (0xc0026ae790) (0xc000a87220) Stream removed, broadcasting: 3
I0710 12:13:00.887494       6 log.go:172] (0xc0026ae790) Go away received
I0710 12:13:00.887596       6 log.go:172] (0xc0026ae790) (0xc001ca3b80) Stream removed, broadcasting: 5
Jul 10 12:13:00.887: INFO: Exec stderr: ""
Jul 10 12:13:00.887: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-d5nw4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:13:00.887: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:13:01.040242       6 log.go:172] (0xc000a69e40) (0xc001ca3e00) Create stream
I0710 12:13:01.040270       6 log.go:172] (0xc000a69e40) (0xc001ca3e00) Stream added, broadcasting: 1
I0710 12:13:01.041915       6 log.go:172] (0xc000a69e40) Reply frame received for 1
I0710 12:13:01.041956       6 log.go:172] (0xc000a69e40) (0xc002036960) Create stream
I0710 12:13:01.041966       6 log.go:172] (0xc000a69e40) (0xc002036960) Stream added, broadcasting: 3
I0710 12:13:01.042793       6 log.go:172] (0xc000a69e40) Reply frame received for 3
I0710 12:13:01.042845       6 log.go:172] (0xc000a69e40) (0xc0005970e0) Create stream
I0710 12:13:01.042859       6 log.go:172] (0xc000a69e40) (0xc0005970e0) Stream added, broadcasting: 5
I0710 12:13:01.043596       6 log.go:172] (0xc000a69e40) Reply frame received for 5
I0710 12:13:01.098397       6 log.go:172] (0xc000a69e40) Data frame received for 3
I0710 12:13:01.098449       6 log.go:172] (0xc002036960) (3) Data frame handling
I0710 12:13:01.098472       6 log.go:172] (0xc002036960) (3) Data frame sent
I0710 12:13:01.098484       6 log.go:172] (0xc000a69e40) Data frame received for 3
I0710 12:13:01.098503       6 log.go:172] (0xc002036960) (3) Data frame handling
I0710 12:13:01.098537       6 log.go:172] (0xc000a69e40) Data frame received for 5
I0710 12:13:01.098563       6 log.go:172] (0xc0005970e0) (5) Data frame handling
I0710 12:13:01.099918       6 log.go:172] (0xc000a69e40) Data frame received for 1
I0710 12:13:01.099943       6 log.go:172] (0xc001ca3e00) (1) Data frame handling
I0710 12:13:01.099961       6 log.go:172] (0xc001ca3e00) (1) Data frame sent
I0710 12:13:01.099974       6 log.go:172] (0xc000a69e40) (0xc001ca3e00) Stream removed, broadcasting: 1
I0710 12:13:01.099997       6 log.go:172] (0xc000a69e40) Go away received
I0710 12:13:01.100087       6 log.go:172] (0xc000a69e40) (0xc001ca3e00) Stream removed, broadcasting: 1
I0710 12:13:01.100129       6 log.go:172] (0xc000a69e40) (0xc002036960) Stream removed, broadcasting: 3
I0710 12:13:01.100151       6 log.go:172] (0xc000a69e40) (0xc0005970e0) Stream removed, broadcasting: 5
Jul 10 12:13:01.100: INFO: Exec stderr: ""
Jul 10 12:13:01.100: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-d5nw4 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:13:01.100: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:13:01.239437       6 log.go:172] (0xc000a68bb0) (0xc00228a0a0) Create stream
I0710 12:13:01.239485       6 log.go:172] (0xc000a68bb0) (0xc00228a0a0) Stream added, broadcasting: 1
I0710 12:13:01.241448       6 log.go:172] (0xc000a68bb0) Reply frame received for 1
I0710 12:13:01.241499       6 log.go:172] (0xc000a68bb0) (0xc00243a000) Create stream
I0710 12:13:01.241513       6 log.go:172] (0xc000a68bb0) (0xc00243a000) Stream added, broadcasting: 3
I0710 12:13:01.242511       6 log.go:172] (0xc000a68bb0) Reply frame received for 3
I0710 12:13:01.242546       6 log.go:172] (0xc000a68bb0) (0xc00228a140) Create stream
I0710 12:13:01.242559       6 log.go:172] (0xc000a68bb0) (0xc00228a140) Stream added, broadcasting: 5
I0710 12:13:01.243408       6 log.go:172] (0xc000a68bb0) Reply frame received for 5
I0710 12:13:01.300159       6 log.go:172] (0xc000a68bb0) Data frame received for 3
I0710 12:13:01.300204       6 log.go:172] (0xc00243a000) (3) Data frame handling
I0710 12:13:01.300221       6 log.go:172] (0xc00243a000) (3) Data frame sent
I0710 12:13:01.300234       6 log.go:172] (0xc000a68bb0) Data frame received for 3
I0710 12:13:01.300245       6 log.go:172] (0xc00243a000) (3) Data frame handling
I0710 12:13:01.300273       6 log.go:172] (0xc000a68bb0) Data frame received for 5
I0710 12:13:01.300289       6 log.go:172] (0xc00228a140) (5) Data frame handling
I0710 12:13:01.301485       6 log.go:172] (0xc000a68bb0) Data frame received for 1
I0710 12:13:01.301497       6 log.go:172] (0xc00228a0a0) (1) Data frame handling
I0710 12:13:01.301504       6 log.go:172] (0xc00228a0a0) (1) Data frame sent
I0710 12:13:01.301517       6 log.go:172] (0xc000a68bb0) (0xc00228a0a0) Stream removed, broadcasting: 1
I0710 12:13:01.301526       6 log.go:172] (0xc000a68bb0) Go away received
I0710 12:13:01.301616       6 log.go:172] (0xc000a68bb0) (0xc00228a0a0) Stream removed, broadcasting: 1
I0710 12:13:01.301632       6 log.go:172] (0xc000a68bb0) (0xc00243a000) Stream removed, broadcasting: 3
I0710 12:13:01.301653       6 log.go:172] (0xc000a68bb0) (0xc00228a140) Stream removed, broadcasting: 5
Jul 10 12:13:01.301: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:13:01.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-d5nw4" for this suite.
Jul 10 12:13:59.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:13:59.376: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-d5nw4, resource: bindings, ignored listing per whitelist
Jul 10 12:13:59.410: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-d5nw4 deletion completed in 58.105182542s

• [SLOW TEST:84.603 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
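
What the exec calls above are checking is the content of /etc/hosts: containers without hostNetwork and without their own /etc/hosts mount should see the file the kubelet manages, while the hostNetwork pod and the container mounting /etc/hosts-original should not. A minimal Go sketch of that verification follows; the exact header string the kubelet writes is an assumption here, the real test compares against the kubelet's own constant.

package main

import (
	"fmt"
	"strings"
)

// kubeletManagedHeader is an assumed value for the header the kubelet writes
// into hosts files it manages.
const kubeletManagedHeader = "# Kubernetes-managed hosts file"

// isKubeletManaged reports whether the /etc/hosts content read back over exec
// carries the kubelet's header.
func isKubeletManaged(etcHosts string) bool {
	return strings.Contains(etcHosts, kubeletManagedHeader)
}

func main() {
	managed := kubeletManagedHeader + "\n127.0.0.1\tlocalhost\n10.244.1.7\ttest-pod\n"
	original := "127.0.0.1\tlocalhost\n"

	fmt.Println(isKubeletManaged(managed))  // true: expected for the hostNetwork=false containers
	fmt.Println(isKubeletManaged(original)) // false: expected for hostNetwork=true or an explicit /etc/hosts mount
}
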
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:13:59.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Jul 10 12:13:59.578: INFO: Pod name pod-release: Found 0 pods out of 1
Jul 10 12:14:04.582: INFO: Pod name pod-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:14:05.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-sxdk8" for this suite.
Jul 10 12:14:15.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:14:15.754: INFO: namespace: e2e-tests-replication-controller-sxdk8, resource: bindings, ignored listing per whitelist
Jul 10 12:14:15.820: INFO: namespace e2e-tests-replication-controller-sxdk8 deletion completed in 10.122030691s

• [SLOW TEST:16.410 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
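
"Released" here means the ReplicationController stops selecting the pod once its matched label is changed (the real controller also removes its controller ownerReference). The selection itself is an equality-based label match, sketched below in Go with illustrative label values.

package main

import "fmt"

// matches reports whether a pod's labels satisfy an equality-based selector,
// i.e. every selector key/value pair is present on the pod.
func matches(selector, podLabels map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"name": "pod-release"}
	pod := map[string]string{"name": "pod-release"}
	fmt.Println(matches(selector, pod)) // true: the RC counts this pod

	// Changing the matched label releases the pod: the RC no longer selects it.
	pod["name"] = "not-matching"
	fmt.Println(matches(selector, pod)) // false: pod is released
}
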
------------------------------
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:14:15.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-d44xv
Jul 10 12:14:22.251: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-d44xv
STEP: checking the pod's current state and verifying that restartCount is present
Jul 10 12:14:22.253: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:18:23.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-d44xv" for this suite.
Jul 10 12:18:33.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:18:34.117: INFO: namespace: e2e-tests-container-probe-d44xv, resource: bindings, ignored listing per whitelist
Jul 10 12:18:34.168: INFO: namespace e2e-tests-container-probe-d44xv deletion completed in 10.528806037s

• [SLOW TEST:258.348 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
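
The liveness probe in this spec runs an exec command, and the container counts as healthy as long as that command exits 0; because the probed file keeps existing, the restart count stays at its initial 0 for the whole observation window. A small Go sketch of how an exec probe result maps to healthy/unhealthy (assumes a Unix-like host with cat on the PATH; the temporary file stands in for /tmp/health).

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// probe models an exec liveness probe: exit status 0 means healthy.
func probe(path string) bool {
	return exec.Command("cat", path).Run() == nil
}

func main() {
	f, err := os.CreateTemp("", "health")
	if err != nil {
		panic(err)
	}
	f.Close()
	defer os.Remove(f.Name())

	fmt.Println("healthy:", probe(f.Name())) // true  -> no restart
	os.Remove(f.Name())
	fmt.Println("healthy:", probe(f.Name())) // false -> the kubelet would restart after failureThreshold
}
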
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:18:34.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-p6gzx
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-p6gzx
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-p6gzx
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-p6gzx
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-p6gzx
Jul 10 12:18:44.334: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-p6gzx, name: ss-0, uid: 7be55cd0-c2a7-11ea-b2c9-0242ac120008, status phase: Pending. Waiting for statefulset controller to delete.
Jul 10 12:18:47.544: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-p6gzx, name: ss-0, uid: 7be55cd0-c2a7-11ea-b2c9-0242ac120008, status phase: Failed. Waiting for statefulset controller to delete.
Jul 10 12:18:47.791: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-p6gzx, name: ss-0, uid: 7be55cd0-c2a7-11ea-b2c9-0242ac120008, status phase: Failed. Waiting for statefulset controller to delete.
Jul 10 12:18:48.040: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-p6gzx
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-p6gzx
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-p6gzx and reaches the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 10 12:18:57.926: INFO: Deleting all statefulset in ns e2e-tests-statefulset-p6gzx
Jul 10 12:18:57.982: INFO: Scaling statefulset ss to 0
Jul 10 12:19:08.818: INFO: Waiting for statefulset status.replicas to be updated to 0
Jul 10 12:19:08.821: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:19:09.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-p6gzx" for this suite.
Jul 10 12:19:27.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:19:27.541: INFO: namespace: e2e-tests-statefulset-p6gzx, resource: bindings, ignored listing per whitelist
Jul 10 12:19:27.573: INFO: namespace e2e-tests-statefulset-p6gzx deletion completed in 18.355704729s

• [SLOW TEST:53.405 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
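
The sequence above (Pending, Failed, Failed, delete event, then Running once the conflicting pod is removed) is the StatefulSet controller repeatedly recreating ss-0 until it can actually start. The following is a toy, self-contained Go simulation of that recreate loop; portFree and the two-attempt threshold are illustrative stand-ins for the hostPort conflict and its removal, not the controller's real logic.

package main

import "fmt"

type pod struct {
	name  string
	phase string // Pending, Running, Failed
}

// run is a hypothetical stand-in for kubelet admission: while another pod
// holds the conflicting hostPort, the stateful pod fails.
func run(portFree bool, p *pod) {
	if portFree {
		p.phase = "Running"
	} else {
		p.phase = "Failed"
	}
}

func main() {
	portFree := false
	attempts := 0
	for {
		// The StatefulSet controller keeps recreating ss-0 whenever the
		// previous incarnation terminates ("recreated and deleted at least once").
		ss0 := &pod{name: "ss-0", phase: "Pending"}
		attempts++
		run(portFree, ss0)
		fmt.Printf("attempt %d: %s is %s\n", attempts, ss0.name, ss0.phase)
		if ss0.phase == "Running" {
			break
		}
		if attempts == 2 {
			portFree = true // the conflicting pod is removed
		}
	}
}
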
------------------------------
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:19:27.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-bv24n
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bv24n to expose endpoints map[]
Jul 10 12:19:27.845: INFO: Get endpoints failed (2.714678ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul 10 12:19:28.848: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bv24n exposes endpoints map[] (1.005958022s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-bv24n
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bv24n to expose endpoints map[pod1:[100]]
Jul 10 12:19:34.301: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.446838392s elapsed, will retry)
Jul 10 12:19:37.858: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bv24n exposes endpoints map[pod1:[100]] (9.002968359s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-bv24n
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bv24n to expose endpoints map[pod1:[100] pod2:[101]]
Jul 10 12:19:43.135: INFO: Unexpected endpoints: found map[9a139e24-c2a7-11ea-b2c9-0242ac120008:[100]], expected map[pod1:[100] pod2:[101]] (5.275116047s elapsed, will retry)
Jul 10 12:19:44.451: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bv24n exposes endpoints map[pod1:[100] pod2:[101]] (6.591086795s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-bv24n
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bv24n to expose endpoints map[pod2:[101]]
Jul 10 12:19:46.127: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bv24n exposes endpoints map[pod2:[101]] (1.672184913s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-bv24n
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-bv24n to expose endpoints map[]
Jul 10 12:19:46.701: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-bv24n exposes endpoints map[] (179.975548ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:19:48.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-bv24n" for this suite.
Jul 10 12:19:59.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:19:59.776: INFO: namespace: e2e-tests-services-bv24n, resource: bindings, ignored listing per whitelist
Jul 10 12:19:59.785: INFO: namespace e2e-tests-services-bv24n deletion completed in 11.536238793s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:32.211 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
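
The repeated "waiting ... to expose endpoints map[...]" lines compare the observed endpoints of multi-endpoint-test against an expected map of pod name to target ports. A minimal Go sketch of that comparison, using the same map shape printed in the log.

package main

import (
	"fmt"
	"reflect"
	"sort"
)

// endpointsMatch compares expected and observed pod -> target-port maps,
// ignoring port order (both input maps are sorted in place).
func endpointsMatch(expected, observed map[string][]int) bool {
	for _, ports := range expected {
		sort.Ints(ports)
	}
	for _, ports := range observed {
		sort.Ints(ports)
	}
	return reflect.DeepEqual(expected, observed)
}

func main() {
	expected := map[string][]int{"pod1": {100}, "pod2": {101}}
	observed := map[string][]int{"pod2": {101}, "pod1": {100}}
	fmt.Println(endpointsMatch(expected, observed)) // true

	// After pod1 is deleted, only pod2's port should remain behind the service.
	observed = map[string][]int{"pod2": {101}}
	fmt.Println(endpointsMatch(expected, observed)) // false until expected shrinks too
}
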
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:19:59.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 10 12:19:59.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:20:06.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-r5sd4" for this suite.
Jul 10 12:20:50.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:20:50.285: INFO: namespace: e2e-tests-pods-r5sd4, resource: bindings, ignored listing per whitelist
Jul 10 12:20:50.313: INFO: namespace e2e-tests-pods-r5sd4 deletion completed in 44.259369209s

• [SLOW TEST:50.527 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
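
Remote command execution over websockets multiplexes stdin/stdout/stderr on one connection by prefixing each binary message with a channel byte. The numbering used below (0=stdin, 1=stdout, 2=stderr) follows the commonly documented channel framing and is stated as an assumption, not taken from this test's source. A small Go sketch of splitting such a frame:

package main

import "fmt"

// splitFrame separates the one-byte channel id from the payload of a
// websocket exec message.
func splitFrame(msg []byte) (channel byte, payload []byte) {
	if len(msg) == 0 {
		return 0, nil
	}
	return msg[0], msg[1:]
}

func main() {
	frames := [][]byte{
		append([]byte{1}, []byte("remote execution output\n")...), // stdout
		{2}, // empty stderr frame
	}
	for _, f := range frames {
		ch, data := splitFrame(f)
		fmt.Printf("channel %d: %q\n", ch, data)
	}
}
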
------------------------------
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:20:50.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-whkxd/configmap-test-cb46d505-c2a7-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume configMaps
Jul 10 12:20:52.440: INFO: Waiting up to 5m0s for pod "pod-configmaps-cb504fb6-c2a7-11ea-a406-0242ac11000f" in namespace "e2e-tests-configmap-whkxd" to be "success or failure"
Jul 10 12:20:52.505: INFO: Pod "pod-configmaps-cb504fb6-c2a7-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 65.679355ms
Jul 10 12:20:54.510: INFO: Pod "pod-configmaps-cb504fb6-c2a7-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07002385s
Jul 10 12:20:56.514: INFO: Pod "pod-configmaps-cb504fb6-c2a7-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07422968s
Jul 10 12:20:58.519: INFO: Pod "pod-configmaps-cb504fb6-c2a7-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079495692s
Jul 10 12:21:02.168: INFO: Pod "pod-configmaps-cb504fb6-c2a7-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.728060419s
Jul 10 12:21:04.174: INFO: Pod "pod-configmaps-cb504fb6-c2a7-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.734423509s
STEP: Saw pod success
Jul 10 12:21:04.174: INFO: Pod "pod-configmaps-cb504fb6-c2a7-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:21:04.177: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-cb504fb6-c2a7-11ea-a406-0242ac11000f container env-test: 
STEP: delete the pod
Jul 10 12:21:04.613: INFO: Waiting for pod pod-configmaps-cb504fb6-c2a7-11ea-a406-0242ac11000f to disappear
Jul 10 12:21:04.660: INFO: Pod pod-configmaps-cb504fb6-c2a7-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:21:04.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-whkxd" for this suite.
Jul 10 12:21:12.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:21:12.805: INFO: namespace: e2e-tests-configmap-whkxd, resource: bindings, ignored listing per whitelist
Jul 10 12:21:12.832: INFO: namespace e2e-tests-configmap-whkxd deletion completed in 8.168843291s

• [SLOW TEST:22.519 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
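
The env test stores key/value data in a ConfigMap, injects it into the test container's environment, and asserts on the container's output. The injection itself is just a data-to-env mapping, sketched below in Go with a hypothetical data-1=value-1 entry.

package main

import (
	"fmt"
	"sort"
)

// envFromConfigMap turns ConfigMap data into NAME=value environment entries,
// sorted for deterministic output.
func envFromConfigMap(data map[string]string) []string {
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	env := make([]string, 0, len(keys))
	for _, k := range keys {
		env = append(env, fmt.Sprintf("%s=%s", k, data[k]))
	}
	return env
}

func main() {
	cm := map[string]string{"data-1": "value-1"} // hypothetical ConfigMap data
	for _, e := range envFromConfigMap(cm) {
		fmt.Println(e) // data-1=value-1, which the pod's env output must contain
	}
}
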
------------------------------
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:21:12.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul 10 12:21:13.244: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 10 12:21:13.327: INFO: Waiting for terminating namespaces to be deleted...
Jul 10 12:21:13.330: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Jul 10 12:21:13.335: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container status recorded)
Jul 10 12:21:13.335: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 10 12:21:13.335: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container status recorded)
Jul 10 12:21:13.335: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 10 12:21:13.335: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Jul 10 12:21:13.340: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container status recorded)
Jul 10 12:21:13.340: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 10 12:21:13.340: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container status recorded)
Jul 10 12:21:13.340: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-dbf8e95a-c2a7-11ea-a406-0242ac11000f 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-dbf8e95a-c2a7-11ea-a406-0242ac11000f off the node hunter-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-dbf8e95a-c2a7-11ea-a406-0242ac11000f
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:21:23.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-thvm2" for this suite.
Jul 10 12:21:40.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:21:40.027: INFO: namespace: e2e-tests-sched-pred-thvm2, resource: bindings, ignored listing per whitelist
Jul 10 12:21:40.063: INFO: namespace e2e-tests-sched-pred-thvm2 deletion completed in 16.394640135s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:27.231 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
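
The flow above stamps a random kubernetes.io/e2e-... label onto the chosen node and relaunches the pod with a matching nodeSelector; the predicate being validated is a subset match between the selector and the node's labels. A minimal Go sketch of that match, reusing the label key from this run purely for illustration.

package main

import "fmt"

// nodeSelectorMatches reports whether every selector key/value pair is present
// in the node's labels, which is the scheduling predicate under test.
func nodeSelectorMatches(selector, nodeLabels map[string]string) bool {
	for k, v := range selector {
		if nodeLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	nodeLabels := map[string]string{"kubernetes.io/hostname": "hunter-worker2"}
	selector := map[string]string{"kubernetes.io/e2e-dbf8e95a-c2a7-11ea-a406-0242ac11000f": "42"}

	fmt.Println(nodeSelectorMatches(selector, nodeLabels)) // false: the pod would stay Pending

	nodeLabels["kubernetes.io/e2e-dbf8e95a-c2a7-11ea-a406-0242ac11000f"] = "42"
	fmt.Println(nodeSelectorMatches(selector, nodeLabels)) // true: the pod can be scheduled
}
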
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:21:40.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 10 12:21:51.004: INFO: Successfully updated pod "pod-update-e87640c2-c2a7-11ea-a406-0242ac11000f"
STEP: verifying the updated pod is in kubernetes
Jul 10 12:21:51.018: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:21:51.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-8l544" for this suite.
Jul 10 12:22:15.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:22:15.680: INFO: namespace: e2e-tests-pods-8l544, resource: bindings, ignored listing per whitelist
Jul 10 12:22:15.732: INFO: namespace e2e-tests-pods-8l544 deletion completed in 24.710512238s

• [SLOW TEST:35.668 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
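
The "Successfully updated pod" / "Pod update OK" pair is a read-modify-write against the API server, which enforces optimistic concurrency through metadata.resourceVersion. The Go sketch below shows only that shape against an in-memory object; the label being mutated is illustrative, not necessarily the field this test actually changes.

package main

import "fmt"

type pod struct {
	resourceVersion int
	labels          map[string]string
}

// put accepts the update only if it carries the resourceVersion that was read,
// mirroring the API server's conflict rule.
func put(stored *pod, updated pod) error {
	if updated.resourceVersion != stored.resourceVersion {
		return fmt.Errorf("conflict: resourceVersion %d != %d", updated.resourceVersion, stored.resourceVersion)
	}
	updated.resourceVersion++
	*stored = updated
	return nil
}

func main() {
	stored := &pod{resourceVersion: 7, labels: map[string]string{"name": "pod-update"}}

	updated := *stored // "GET" the current object
	updated.labels = map[string]string{"name": "pod-update", "time": "updated"}
	if err := put(stored, updated); err != nil {
		fmt.Println("retry needed:", err)
		return
	}
	fmt.Println("Pod update OK:", stored.labels, "resourceVersion", stored.resourceVersion)
}
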
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:22:15.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 10 12:22:16.488: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fdfdc7a8-c2a7-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-24v7b" to be "success or failure"
Jul 10 12:22:16.689: INFO: Pod "downwardapi-volume-fdfdc7a8-c2a7-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 201.621742ms
Jul 10 12:22:18.693: INFO: Pod "downwardapi-volume-fdfdc7a8-c2a7-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205694464s
Jul 10 12:22:20.753: INFO: Pod "downwardapi-volume-fdfdc7a8-c2a7-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264892137s
Jul 10 12:22:22.756: INFO: Pod "downwardapi-volume-fdfdc7a8-c2a7-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.268268474s
Jul 10 12:22:24.759: INFO: Pod "downwardapi-volume-fdfdc7a8-c2a7-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.271628553s
STEP: Saw pod success
Jul 10 12:22:24.759: INFO: Pod "downwardapi-volume-fdfdc7a8-c2a7-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:22:24.762: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-fdfdc7a8-c2a7-11ea-a406-0242ac11000f container client-container: 
STEP: delete the pod
Jul 10 12:22:24.876: INFO: Waiting for pod downwardapi-volume-fdfdc7a8-c2a7-11ea-a406-0242ac11000f to disappear
Jul 10 12:22:25.287: INFO: Pod downwardapi-volume-fdfdc7a8-c2a7-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:22:25.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-24v7b" for this suite.
Jul 10 12:22:31.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:22:31.869: INFO: namespace: e2e-tests-projected-24v7b, resource: bindings, ignored listing per whitelist
Jul 10 12:22:31.893: INFO: namespace e2e-tests-projected-24v7b deletion completed in 6.602494259s

• [SLOW TEST:16.161 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
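
The spec that just finished creates a pod whose projected downward API volume exposes the container's own memory request as a file, then checks the container's output against that request. A minimal sketch of an equivalent pod, written against the Kubernetes core/v1 Go types, follows; the object name, image, command, mount path and the 32Mi request are illustrative stand-ins, not values taken from this log.

package examples

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedMemoryRequestPod mounts a projected downward API volume whose
// "memory_request" file carries the container's requests.memory value.
func projectedMemoryRequestPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"}, // illustrative name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox", // stand-in; the e2e suite uses its own test image
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path: "memory_request",
                                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                                        ContainerName: "client-container",
                                        Resource:      "requests.memory",
                                    },
                                }},
                            },
                        }},
                    },
                },
            }},
        },
    }
}

------------------------------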
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:22:31.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul 10 12:22:32.361: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 10 12:22:32.539: INFO: Waiting for terminating namespaces to be deleted...
Jul 10 12:22:32.542: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Jul 10 12:22:32.546: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container status recorded)
Jul 10 12:22:32.546: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 10 12:22:32.546: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container status recorded)
Jul 10 12:22:32.546: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 10 12:22:32.546: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Jul 10 12:22:32.550: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container status recorded)
Jul 10 12:22:32.551: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 10 12:22:32.551: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container status recorded)
Jul 10 12:22:32.551: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
Jul 10 12:22:33.396: INFO: Pod kindnet-2w5m4 requesting resource cpu=100m on Node hunter-worker
Jul 10 12:22:33.396: INFO: Pod kindnet-hpnvh requesting resource cpu=100m on Node hunter-worker2
Jul 10 12:22:33.396: INFO: Pod kube-proxy-8wnps requesting resource cpu=0m on Node hunter-worker
Jul 10 12:22:33.396: INFO: Pod kube-proxy-b6f6s requesting resource cpu=0m on Node hunter-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0812d713-c2a8-11ea-a406-0242ac11000f.16206393999224b3], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-94sjl/filler-pod-0812d713-c2a8-11ea-a406-0242ac11000f to hunter-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0812d713-c2a8-11ea-a406-0242ac11000f.16206393f532fcd8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0812d713-c2a8-11ea-a406-0242ac11000f.1620639553e49bdf], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0812d713-c2a8-11ea-a406-0242ac11000f.16206396047fcf22], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-081399bb-c2a8-11ea-a406-0242ac11000f.162063939c68dd0c], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-94sjl/filler-pod-081399bb-c2a8-11ea-a406-0242ac11000f to hunter-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-081399bb-c2a8-11ea-a406-0242ac11000f.16206394cb6697f2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-081399bb-c2a8-11ea-a406-0242ac11000f.162063962148b5cb], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-081399bb-c2a8-11ea-a406-0242ac11000f.162063963066a405], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162063966896ef92], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:22:46.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-94sjl" for this suite.
Jul 10 12:23:02.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:23:02.903: INFO: namespace: e2e-tests-sched-pred-94sjl, resource: bindings, ignored listing per whitelist
Jul 10 12:23:02.971: INFO: namespace e2e-tests-sched-pred-94sjl deletion completed in 16.092126207s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:31.078 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
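
The scheduler-predicates spec above first logs the CPU already requested per node, then starts "filler" pods sized to consume most of each node's remaining allocatable CPU, and finally creates one more pod whose request cannot fit anywhere, which must produce the FailedScheduling event quoted in the log (the taint rejection in that event is the one node the pod does not tolerate; the two workers are rejected for insufficient CPU). A sketch of the shape of those pods (core/v1 Go types; the helper name and the way the request is passed in are illustrative):

package examples

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cpuRequestPod builds a pause pod with an explicit CPU request; pods of this
// shape serve both as the "filler" pods and as the final pod the scheduler
// has to reject with "Insufficient cpu".
func cpuRequestPod(name, cpu string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: name},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.1", // image named in the events above
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
                },
            }},
        },
    }
}

------------------------------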
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:23:02.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-1ab00005-c2a8-11ea-a406-0242ac11000f
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-1ab00005-c2a8-11ea-a406-0242ac11000f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:23:11.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ndlff" for this suite.
Jul 10 12:23:35.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:23:35.435: INFO: namespace: e2e-tests-projected-ndlff, resource: bindings, ignored listing per whitelist
Jul 10 12:23:35.445: INFO: namespace e2e-tests-projected-ndlff deletion completed in 24.27645316s

• [SLOW TEST:32.474 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
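
The spec above mounts a ConfigMap through a projected volume, updates the ConfigMap, and then polls the mounted file until the kubelet rewrites it ("waiting to observe update in volume"); propagation is eventually consistent, which is why the step polls rather than asserting immediately. A minimal sketch of such a volume definition (core/v1 Go types; the ConfigMap name and key are illustrative):

package examples

import corev1 "k8s.io/api/core/v1"

// projectedConfigMapVolume returns a projected volume backed by a ConfigMap.
// When the ConfigMap is updated, the kubelet eventually rewrites the mounted
// file in place, which is what the polling step above waits for.
func projectedConfigMapVolume() corev1.Volume {
    return corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-upd"},
                        Items:                []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}},
                    },
                }},
            },
        },
    }
}

------------------------------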
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:23:35.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-2d9f4ef2-c2a8-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume configMaps
Jul 10 12:23:36.694: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2da6ef46-c2a8-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-7rz4t" to be "success or failure"
Jul 10 12:23:36.712: INFO: Pod "pod-projected-configmaps-2da6ef46-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.248295ms
Jul 10 12:23:38.717: INFO: Pod "pod-projected-configmaps-2da6ef46-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022596669s
Jul 10 12:23:40.720: INFO: Pod "pod-projected-configmaps-2da6ef46-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026290827s
Jul 10 12:23:42.726: INFO: Pod "pod-projected-configmaps-2da6ef46-c2a8-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 6.031694576s
Jul 10 12:23:46.338: INFO: Pod "pod-projected-configmaps-2da6ef46-c2a8-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.643750945s
STEP: Saw pod success
Jul 10 12:23:46.338: INFO: Pod "pod-projected-configmaps-2da6ef46-c2a8-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:23:46.622: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-2da6ef46-c2a8-11ea-a406-0242ac11000f container projected-configmap-volume-test: 
STEP: delete the pod
Jul 10 12:23:47.152: INFO: Waiting for pod pod-projected-configmaps-2da6ef46-c2a8-11ea-a406-0242ac11000f to disappear
Jul 10 12:23:47.222: INFO: Pod pod-projected-configmaps-2da6ef46-c2a8-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:23:47.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7rz4t" for this suite.
Jul 10 12:23:53.401: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:23:53.416: INFO: namespace: e2e-tests-projected-7rz4t, resource: bindings, ignored listing per whitelist
Jul 10 12:23:53.461: INFO: namespace e2e-tests-projected-7rz4t deletion completed in 6.235315279s

• [SLOW TEST:18.016 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:23:53.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 10 12:23:53.591: INFO: Waiting up to 5m0s for pod "pod-37dea1be-c2a8-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-vvcxx" to be "success or failure"
Jul 10 12:23:53.606: INFO: Pod "pod-37dea1be-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.835698ms
Jul 10 12:23:55.611: INFO: Pod "pod-37dea1be-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019362405s
Jul 10 12:23:57.615: INFO: Pod "pod-37dea1be-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023779944s
Jul 10 12:23:59.618: INFO: Pod "pod-37dea1be-c2a8-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026985983s
STEP: Saw pod success
Jul 10 12:23:59.618: INFO: Pod "pod-37dea1be-c2a8-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:23:59.621: INFO: Trying to get logs from node hunter-worker pod pod-37dea1be-c2a8-11ea-a406-0242ac11000f container test-container: 
STEP: delete the pod
Jul 10 12:23:59.809: INFO: Waiting for pod pod-37dea1be-c2a8-11ea-a406-0242ac11000f to disappear
Jul 10 12:23:59.820: INFO: Pod pod-37dea1be-c2a8-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:23:59.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-vvcxx" for this suite.
Jul 10 12:24:05.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:24:05.940: INFO: namespace: e2e-tests-emptydir-vvcxx, resource: bindings, ignored listing per whitelist
Jul 10 12:24:05.990: INFO: namespace e2e-tests-emptydir-vvcxx deletion completed in 6.166775232s

• [SLOW TEST:12.528 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
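
In the (root,0666,tmpfs) spec above, the pod mounts a memory-backed emptyDir, the test container creates a file on it with mode 0666, and the test checks the mode and medium the container reports. Only the volume definition is sketched here (core/v1 Go types; the volume name is illustrative):

package examples

import corev1 "k8s.io/api/core/v1"

// tmpfsEmptyDirVolume is a memory-backed (tmpfs) emptyDir of the kind the
// spec above mounts before writing and inspecting a 0666 file on it.
func tmpfsEmptyDirVolume() corev1.Volume {
    return corev1.Volume{
        Name: "test-volume",
        VolumeSource: corev1.VolumeSource{
            EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
        },
    }
}

------------------------------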
SSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:24:05.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 10 12:24:14.730: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3f629786-c2a8-11ea-a406-0242ac11000f"
Jul 10 12:24:14.730: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3f629786-c2a8-11ea-a406-0242ac11000f" in namespace "e2e-tests-pods-frd5m" to be "terminated due to deadline exceeded"
Jul 10 12:24:14.772: INFO: Pod "pod-update-activedeadlineseconds-3f629786-c2a8-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 42.172922ms
Jul 10 12:24:17.416: INFO: Pod "pod-update-activedeadlineseconds-3f629786-c2a8-11ea-a406-0242ac11000f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.686023845s
Jul 10 12:24:17.416: INFO: Pod "pod-update-activedeadlineseconds-3f629786-c2a8-11ea-a406-0242ac11000f" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:24:17.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-frd5m" for this suite.
Jul 10 12:24:25.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:24:25.788: INFO: namespace: e2e-tests-pods-frd5m, resource: bindings, ignored listing per whitelist
Jul 10 12:24:25.790: INFO: namespace e2e-tests-pods-frd5m deletion completed in 8.372150518s

• [SLOW TEST:19.800 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
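
The spec above starts a running pod and then updates spec.activeDeadlineSeconds to a small value; once the deadline elapses the kubelet terminates the pod and marks it Failed with reason DeadlineExceeded, which is the "terminated due to deadline exceeded" condition logged above. A minimal sketch of the field in use (core/v1 Go types): the test applies it by updating a live pod, while this sketch sets it at creation time for brevity, and the name, image, command and 5-second value are illustrative.

package examples

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithActiveDeadline sets spec.activeDeadlineSeconds; once the deadline
// passes, the kubelet kills the pod and reports phase Failed with reason
// DeadlineExceeded.
func podWithActiveDeadline() *corev1.Pod {
    deadline := int64(5)
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds-example"},
        Spec: corev1.PodSpec{
            ActiveDeadlineSeconds: &deadline,
            Containers: []corev1.Container{{
                Name:    "main",
                Image:   "busybox",
                Command: []string{"sleep", "3600"},
            }},
        },
    }
}

------------------------------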
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:24:25.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 10 12:24:25.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b292bf0-c2a8-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-fljmv" to be "success or failure"
Jul 10 12:24:25.996: INFO: Pod "downwardapi-volume-4b292bf0-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 15.316489ms
Jul 10 12:24:28.014: INFO: Pod "downwardapi-volume-4b292bf0-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033939185s
Jul 10 12:24:30.019: INFO: Pod "downwardapi-volume-4b292bf0-c2a8-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038795175s
STEP: Saw pod success
Jul 10 12:24:30.019: INFO: Pod "downwardapi-volume-4b292bf0-c2a8-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:24:30.022: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-4b292bf0-c2a8-11ea-a406-0242ac11000f container client-container: 
STEP: delete the pod
Jul 10 12:24:30.394: INFO: Waiting for pod downwardapi-volume-4b292bf0-c2a8-11ea-a406-0242ac11000f to disappear
Jul 10 12:24:30.456: INFO: Pod downwardapi-volume-4b292bf0-c2a8-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:24:30.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-fljmv" for this suite.
Jul 10 12:24:36.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:24:36.487: INFO: namespace: e2e-tests-downward-api-fljmv, resource: bindings, ignored listing per whitelist
Jul 10 12:24:36.547: INFO: namespace e2e-tests-downward-api-fljmv deletion completed in 6.085861838s

• [SLOW TEST:10.756 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:24:36.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 10 12:24:41.359: INFO: Successfully updated pod "labelsupdate518a29a9-c2a8-11ea-a406-0242ac11000f"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:24:45.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tr2js" for this suite.
Jul 10 12:25:09.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:25:09.533: INFO: namespace: e2e-tests-projected-tr2js, resource: bindings, ignored listing per whitelist
Jul 10 12:25:09.560: INFO: namespace e2e-tests-projected-tr2js deletion completed in 24.082434078s

• [SLOW TEST:33.012 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:25:09.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 10 12:25:10.837: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul 10 12:25:10.857: INFO: Number of nodes with available pods: 0
Jul 10 12:25:10.857: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul 10 12:25:11.145: INFO: Number of nodes with available pods: 0
Jul 10 12:25:11.145: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:12.361: INFO: Number of nodes with available pods: 0
Jul 10 12:25:12.361: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:13.148: INFO: Number of nodes with available pods: 0
Jul 10 12:25:13.148: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:14.214: INFO: Number of nodes with available pods: 0
Jul 10 12:25:14.214: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:15.149: INFO: Number of nodes with available pods: 0
Jul 10 12:25:15.149: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:16.153: INFO: Number of nodes with available pods: 0
Jul 10 12:25:16.153: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:17.267: INFO: Number of nodes with available pods: 0
Jul 10 12:25:17.267: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:18.150: INFO: Number of nodes with available pods: 1
Jul 10 12:25:18.150: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul 10 12:25:18.321: INFO: Number of nodes with available pods: 1
Jul 10 12:25:18.321: INFO: Number of running nodes: 0, number of available pods: 1
Jul 10 12:25:19.325: INFO: Number of nodes with available pods: 0
Jul 10 12:25:19.325: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul 10 12:25:19.361: INFO: Number of nodes with available pods: 0
Jul 10 12:25:19.361: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:20.365: INFO: Number of nodes with available pods: 0
Jul 10 12:25:20.365: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:21.365: INFO: Number of nodes with available pods: 0
Jul 10 12:25:21.365: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:22.365: INFO: Number of nodes with available pods: 0
Jul 10 12:25:22.365: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:23.366: INFO: Number of nodes with available pods: 0
Jul 10 12:25:23.366: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:24.366: INFO: Number of nodes with available pods: 0
Jul 10 12:25:24.366: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:25.366: INFO: Number of nodes with available pods: 0
Jul 10 12:25:25.366: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:26.366: INFO: Number of nodes with available pods: 0
Jul 10 12:25:26.366: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:27.365: INFO: Number of nodes with available pods: 0
Jul 10 12:25:27.365: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:28.365: INFO: Number of nodes with available pods: 0
Jul 10 12:25:28.365: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:29.366: INFO: Number of nodes with available pods: 0
Jul 10 12:25:29.366: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:30.366: INFO: Number of nodes with available pods: 0
Jul 10 12:25:30.366: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:31.365: INFO: Number of nodes with available pods: 0
Jul 10 12:25:31.365: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:25:32.365: INFO: Number of nodes with available pods: 1
Jul 10 12:25:32.365: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-ppnnm, will wait for the garbage collector to delete the pods
Jul 10 12:25:32.429: INFO: Deleting DaemonSet.extensions daemon-set took: 6.524971ms
Jul 10 12:25:32.529: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.276129ms
Jul 10 12:25:47.732: INFO: Number of nodes with available pods: 0
Jul 10 12:25:47.732: INFO: Number of running nodes: 0, number of available pods: 0
Jul 10 12:25:47.734: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-ppnnm/daemonsets","resourceVersion":"24299"},"items":null}

Jul 10 12:25:47.736: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-ppnnm/pods","resourceVersion":"24299"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:25:48.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-ppnnm" for this suite.
Jul 10 12:25:58.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:25:58.682: INFO: namespace: e2e-tests-daemonsets-ppnnm, resource: bindings, ignored listing per whitelist
Jul 10 12:25:59.304: INFO: namespace e2e-tests-daemonsets-ppnnm deletion completed in 11.035143172s

• [SLOW TEST:49.744 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
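
The "complex daemon" spec above gives the DaemonSet's pod template a node selector, so daemon pods are scheduled only once a node carries the matching label ("Change node label to blue...") and are removed again when the label changes; it also switches the update strategy to RollingUpdate. A rough sketch of that shape (apps/v1 and core/v1 Go types; the label key/value and image are illustrative):

package examples

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nodeSelectorDaemonSet sketches a DaemonSet whose pod template carries a
// node selector, so pods run only on nodes labelled to match, plus a
// RollingUpdate strategy of the kind the spec above switches to.
func nodeSelectorDaemonSet() *appsv1.DaemonSet {
    labels := map[string]string{"daemonset-name": "daemon-set"}
    return &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                Type: appsv1.RollingUpdateDaemonSetStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    NodeSelector: map[string]string{"color": "blue"}, // illustrative label
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "k8s.gcr.io/pause:3.1", // stand-in image
                    }},
                },
            },
        },
    }
}

------------------------------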
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:25:59.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-2p6wg/secret-test-82f37384-c2a8-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume secrets
Jul 10 12:25:59.610: INFO: Waiting up to 5m0s for pod "pod-configmaps-82f3d19b-c2a8-11ea-a406-0242ac11000f" in namespace "e2e-tests-secrets-2p6wg" to be "success or failure"
Jul 10 12:25:59.693: INFO: Pod "pod-configmaps-82f3d19b-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 82.6052ms
Jul 10 12:26:01.696: INFO: Pod "pod-configmaps-82f3d19b-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085813552s
Jul 10 12:26:03.699: INFO: Pod "pod-configmaps-82f3d19b-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08929329s
Jul 10 12:26:05.962: INFO: Pod "pod-configmaps-82f3d19b-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.352245139s
Jul 10 12:26:07.999: INFO: Pod "pod-configmaps-82f3d19b-c2a8-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 8.388599252s
Jul 10 12:26:10.003: INFO: Pod "pod-configmaps-82f3d19b-c2a8-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.392597263s
STEP: Saw pod success
Jul 10 12:26:10.003: INFO: Pod "pod-configmaps-82f3d19b-c2a8-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:26:10.006: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-82f3d19b-c2a8-11ea-a406-0242ac11000f container env-test: 
STEP: delete the pod
Jul 10 12:26:10.522: INFO: Waiting for pod pod-configmaps-82f3d19b-c2a8-11ea-a406-0242ac11000f to disappear
Jul 10 12:26:10.579: INFO: Pod pod-configmaps-82f3d19b-c2a8-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:26:10.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2p6wg" for this suite.
Jul 10 12:26:18.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:26:18.797: INFO: namespace: e2e-tests-secrets-2p6wg, resource: bindings, ignored listing per whitelist
Jul 10 12:26:18.857: INFO: namespace e2e-tests-secrets-2p6wg deletion completed in 8.273670218s

• [SLOW TEST:19.553 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
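
The spec above creates a Secret and a pod whose "env-test" container imports one Secret key as an environment variable, then compares the value the container prints. A minimal sketch of that wiring (core/v1 Go types; the variable name, Secret name and key are illustrative):

package examples

import corev1 "k8s.io/api/core/v1"

// secretEnvVar wires a single Secret key into a container environment
// variable, the mechanism the "consumable via the environment" spec asserts.
func secretEnvVar() corev1.EnvVar {
    return corev1.EnvVar{
        Name: "SECRET_DATA",
        ValueFrom: &corev1.EnvVarSource{
            SecretKeyRef: &corev1.SecretKeySelector{
                LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
                Key:                  "data-1",
            },
        },
    }
}

------------------------------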
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:26:18.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-8edb1ba9-c2a8-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume secrets
Jul 10 12:26:19.880: INFO: Waiting up to 5m0s for pod "pod-secrets-8f0b01fb-c2a8-11ea-a406-0242ac11000f" in namespace "e2e-tests-secrets-8fvmj" to be "success or failure"
Jul 10 12:26:20.076: INFO: Pod "pod-secrets-8f0b01fb-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 196.04643ms
Jul 10 12:26:22.202: INFO: Pod "pod-secrets-8f0b01fb-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321864431s
Jul 10 12:26:24.232: INFO: Pod "pod-secrets-8f0b01fb-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352159179s
Jul 10 12:26:26.235: INFO: Pod "pod-secrets-8f0b01fb-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.355574296s
Jul 10 12:26:28.843: INFO: Pod "pod-secrets-8f0b01fb-c2a8-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 8.962900454s
Jul 10 12:26:30.986: INFO: Pod "pod-secrets-8f0b01fb-c2a8-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 11.106540056s
Jul 10 12:26:32.989: INFO: Pod "pod-secrets-8f0b01fb-c2a8-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.10942908s
STEP: Saw pod success
Jul 10 12:26:32.989: INFO: Pod "pod-secrets-8f0b01fb-c2a8-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:26:32.991: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-8f0b01fb-c2a8-11ea-a406-0242ac11000f container secret-volume-test: 
STEP: delete the pod
Jul 10 12:26:33.435: INFO: Waiting for pod pod-secrets-8f0b01fb-c2a8-11ea-a406-0242ac11000f to disappear
Jul 10 12:26:33.531: INFO: Pod pod-secrets-8f0b01fb-c2a8-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:26:33.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8fvmj" for this suite.
Jul 10 12:26:43.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:26:43.621: INFO: namespace: e2e-tests-secrets-8fvmj, resource: bindings, ignored listing per whitelist
Jul 10 12:26:43.642: INFO: namespace e2e-tests-secrets-8fvmj deletion completed in 10.106836437s

• [SLOW TEST:24.785 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
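
The spec above mounts a Secret volume with an explicit defaultMode into a pod that runs as a non-root UID with an fsGroup set, and verifies the ownership and mode the container sees on the mounted files. A rough sketch of that combination (core/v1 Go types): the concrete mode, UID and GID the test uses are not visible in this log, so the values below are illustrative.

package examples

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretVolumePodNonRoot combines a Secret volume with an explicit
// defaultMode and a pod security context with a non-root UID and an fsGroup.
func secretVolumePodNonRoot() *corev1.Pod {
    mode := int32(0440)
    uid := int64(1000)
    fsGroup := int64(1001)
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
        Spec: corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
            Containers: []corev1.Container{{
                Name:         "secret-volume-test",
                Image:        "busybox",
                Command:      []string{"sh", "-c", "ls -l /etc/secret-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: "secret-test", DefaultMode: &mode},
                },
            }},
        },
    }
}

------------------------------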
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:26:43.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 10 12:26:44.108: INFO: Waiting up to 5m0s for pod "pod-9d810b3f-c2a8-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-sdmj8" to be "success or failure"
Jul 10 12:26:44.112: INFO: Pod "pod-9d810b3f-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.719471ms
Jul 10 12:26:46.197: INFO: Pod "pod-9d810b3f-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088575429s
Jul 10 12:26:48.200: INFO: Pod "pod-9d810b3f-c2a8-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092113613s
STEP: Saw pod success
Jul 10 12:26:48.200: INFO: Pod "pod-9d810b3f-c2a8-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:26:48.203: INFO: Trying to get logs from node hunter-worker pod pod-9d810b3f-c2a8-11ea-a406-0242ac11000f container test-container: 
STEP: delete the pod
Jul 10 12:26:48.347: INFO: Waiting for pod pod-9d810b3f-c2a8-11ea-a406-0242ac11000f to disappear
Jul 10 12:26:48.400: INFO: Pod pod-9d810b3f-c2a8-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:26:48.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sdmj8" for this suite.
Jul 10 12:26:54.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:26:54.450: INFO: namespace: e2e-tests-emptydir-sdmj8, resource: bindings, ignored listing per whitelist
Jul 10 12:26:54.508: INFO: namespace e2e-tests-emptydir-sdmj8 deletion completed in 6.105073896s

• [SLOW TEST:10.865 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:26:54.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 10 12:26:55.404: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jul 10 12:26:55.408: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-gkl2z/daemonsets","resourceVersion":"24514"},"items":null}

Jul 10 12:26:55.410: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-gkl2z/pods","resourceVersion":"24514"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:26:55.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-gkl2z" for this suite.
Jul 10 12:27:01.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:27:01.535: INFO: namespace: e2e-tests-daemonsets-gkl2z, resource: bindings, ignored listing per whitelist
Jul 10 12:27:01.575: INFO: namespace e2e-tests-daemonsets-gkl2z deletion completed in 6.156745338s

S [SKIPPING] [7.067 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jul 10 12:26:55.404: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:27:01.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 10 12:27:13.869: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 10 12:27:13.886: INFO: Pod pod-with-poststart-http-hook still exists
Jul 10 12:27:15.886: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 10 12:27:15.890: INFO: Pod pod-with-poststart-http-hook still exists
Jul 10 12:27:17.886: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 10 12:27:17.898: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:27:17.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-vbp8q" for this suite.
Jul 10 12:27:41.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:27:42.038: INFO: namespace: e2e-tests-container-lifecycle-hook-vbp8q, resource: bindings, ignored listing per whitelist
Jul 10 12:27:42.049: INFO: namespace e2e-tests-container-lifecycle-hook-vbp8q deletion completed in 24.148157242s

• [SLOW TEST:40.474 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
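
In the spec above, the BeforeEach starts a helper pod that serves HTTP, the test pod carries a postStart HTTP hook pointed at it, the kubelet issues the GET right after the container starts, and the test confirms the helper received the request before deleting the pod. A minimal sketch of the hook definition: host, path and port are illustrative, and the handler type name assumes a recent k8s.io/api (older releases, including the v1.13-era code under test here, call the type corev1.Handler rather than corev1.LifecycleHandler).

package examples

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// postStartHTTPHook attaches a postStart HTTP hook; the kubelet performs the
// GET against targetIP right after the container starts.
func postStartHTTPHook(targetIP string) corev1.Lifecycle {
    return corev1.Lifecycle{
        PostStart: &corev1.LifecycleHandler{
            HTTPGet: &corev1.HTTPGetAction{
                Host: targetIP,
                Path: "/echo?msg=poststart", // illustrative path
                Port: intstr.FromInt(8080),  // illustrative port
            },
        },
    }
}

------------------------------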
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:27:42.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-c01eb131-c2a8-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume configMaps
Jul 10 12:27:42.203: INFO: Waiting up to 5m0s for pod "pod-configmaps-c0212f1b-c2a8-11ea-a406-0242ac11000f" in namespace "e2e-tests-configmap-8gvsh" to be "success or failure"
Jul 10 12:27:42.209: INFO: Pod "pod-configmaps-c0212f1b-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.840518ms
Jul 10 12:27:44.233: INFO: Pod "pod-configmaps-c0212f1b-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030168344s
Jul 10 12:27:46.236: INFO: Pod "pod-configmaps-c0212f1b-c2a8-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033196078s
Jul 10 12:27:49.156: INFO: Pod "pod-configmaps-c0212f1b-c2a8-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.952504786s
STEP: Saw pod success
Jul 10 12:27:49.156: INFO: Pod "pod-configmaps-c0212f1b-c2a8-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:27:49.159: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-c0212f1b-c2a8-11ea-a406-0242ac11000f container configmap-volume-test: 
STEP: delete the pod
Jul 10 12:27:49.336: INFO: Waiting for pod pod-configmaps-c0212f1b-c2a8-11ea-a406-0242ac11000f to disappear
Jul 10 12:27:49.429: INFO: Pod pod-configmaps-c0212f1b-c2a8-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:27:49.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-8gvsh" for this suite.
Jul 10 12:27:55.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:27:55.658: INFO: namespace: e2e-tests-configmap-8gvsh, resource: bindings, ignored listing per whitelist
Jul 10 12:27:55.877: INFO: namespace e2e-tests-configmap-8gvsh deletion completed in 6.44514176s

• [SLOW TEST:13.827 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:27:55.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul 10 12:27:56.290: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:28:05.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-k6pjm" for this suite.
Jul 10 12:28:13.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:28:13.883: INFO: namespace: e2e-tests-init-container-k6pjm, resource: bindings, ignored listing per whitelist
Jul 10 12:28:13.901: INFO: namespace e2e-tests-init-container-k6pjm deletion completed in 8.325191169s

• [SLOW TEST:18.024 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
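
The spec above builds a pod with restartPolicy Never whose init container always exits non-zero, and asserts that the app container never starts and the pod ends up Failed. A minimal sketch of such a pod (core/v1 Go types; names, image and commands are illustrative):

package examples

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitContainerPod pairs RestartPolicy Never with an init container
// that always fails, so the app container never runs and the pod goes Failed.
func failingInitContainerPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            InitContainers: []corev1.Container{{
                Name:    "init1",
                Image:   "busybox",
                Command: []string{"sh", "-c", "exit 1"}, // always fails
            }},
            Containers: []corev1.Container{{
                Name:    "run1",
                Image:   "busybox",
                Command: []string{"sh", "-c", "echo should never run"},
            }},
        },
    }
}

------------------------------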
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:28:13.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-wg5zl
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jul 10 12:28:14.169: INFO: Found 0 stateful pods, waiting for 3
Jul 10 12:28:27.024: INFO: Found 2 stateful pods, waiting for 3
Jul 10 12:28:34.391: INFO: Found 2 stateful pods, waiting for 3
Jul 10 12:28:44.174: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 12:28:44.174: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 12:28:44.174: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 10 12:28:54.331: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 12:28:54.332: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 12:28:54.332: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul 10 12:28:54.358: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul 10 12:29:04.638: INFO: Updating stateful set ss2
Jul 10 12:29:04.891: INFO: Waiting for Pod e2e-tests-statefulset-wg5zl/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 12:29:14.933: INFO: Waiting for Pod e2e-tests-statefulset-wg5zl/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jul 10 12:29:29.586: INFO: Found 2 stateful pods, waiting for 3
Jul 10 12:29:39.738: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 12:29:39.738: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 12:29:39.738: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 10 12:29:49.591: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 12:29:49.591: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 10 12:29:49.591: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul 10 12:29:49.613: INFO: Updating stateful set ss2
Jul 10 12:29:49.649: INFO: Waiting for Pod e2e-tests-statefulset-wg5zl/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 12:29:59.669: INFO: Updating stateful set ss2
Jul 10 12:29:59.859: INFO: Waiting for StatefulSet e2e-tests-statefulset-wg5zl/ss2 to complete update
Jul 10 12:29:59.859: INFO: Waiting for Pod e2e-tests-statefulset-wg5zl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 12:30:10.484: INFO: Waiting for StatefulSet e2e-tests-statefulset-wg5zl/ss2 to complete update
Jul 10 12:30:10.484: INFO: Waiting for Pod e2e-tests-statefulset-wg5zl/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 10 12:30:20.877: INFO: Waiting for StatefulSet e2e-tests-statefulset-wg5zl/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 10 12:30:29.866: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wg5zl
Jul 10 12:30:29.869: INFO: Scaling statefulset ss2 to 0
Jul 10 12:30:59.990: INFO: Waiting for statefulset status.replicas updated to 0
Jul 10 12:30:59.993: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:31:00.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-wg5zl" for this suite.
Jul 10 12:31:11.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:31:11.349: INFO: namespace: e2e-tests-statefulset-wg5zl, resource: bindings, ignored listing per whitelist
Jul 10 12:31:11.389: INFO: namespace e2e-tests-statefulset-wg5zl deletion completed in 10.450059627s

• [SLOW TEST:177.487 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
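
A minimal by-hand sketch of the canary and phased rolling update the StatefulSet test above drives, assuming a 3-replica StatefulSet named ss2 whose single container is named nginx and whose updateStrategy is the default RollingUpdate (the <ns> placeholder and the container name are assumptions; the image tags are the ones shown in the log):

# Hold every pod back by setting the partition above the replica count,
# then change the template image; no pod should be recreated yet.
kubectl -n <ns> patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":3}}}}'
kubectl -n <ns> set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine

# Canary: lower the partition to 2 so only the highest ordinal (ss2-2) is updated.
kubectl -n <ns> patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'

# Phased roll-out: keep lowering the partition (1, then 0) until every pod
# is on the new revision, then wait for the rollout to settle.
kubectl -n <ns> patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl -n <ns> rollout status statefulset/ss2

Deleting a partitioned pod (as in "Restoring Pods to the correct revision when they are deleted" above) should bring it back at whichever revision its ordinal falls under.
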
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:31:11.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 10 12:31:11.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-xrkd7'
Jul 10 12:31:22.251: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 10 12:31:22.251: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jul 10 12:31:26.047: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-pkbzn]
Jul 10 12:31:26.047: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-pkbzn" in namespace "e2e-tests-kubectl-xrkd7" to be "running and ready"
Jul 10 12:31:26.460: INFO: Pod "e2e-test-nginx-rc-pkbzn": Phase="Pending", Reason="", readiness=false. Elapsed: 413.239568ms
Jul 10 12:31:28.621: INFO: Pod "e2e-test-nginx-rc-pkbzn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.574192616s
Jul 10 12:31:30.656: INFO: Pod "e2e-test-nginx-rc-pkbzn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.609105887s
Jul 10 12:31:32.659: INFO: Pod "e2e-test-nginx-rc-pkbzn": Phase="Running", Reason="", readiness=true. Elapsed: 6.61221211s
Jul 10 12:31:32.659: INFO: Pod "e2e-test-nginx-rc-pkbzn" satisfied condition "running and ready"
Jul 10 12:31:32.659: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-pkbzn]
Jul 10 12:31:32.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-xrkd7'
Jul 10 12:31:32.775: INFO: stderr: ""
Jul 10 12:31:32.775: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jul 10 12:31:32.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-xrkd7'
Jul 10 12:31:32.878: INFO: stderr: ""
Jul 10 12:31:32.878: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:31:32.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xrkd7" for this suite.
Jul 10 12:31:59.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:31:59.065: INFO: namespace: e2e-tests-kubectl-xrkd7, resource: bindings, ignored listing per whitelist
Jul 10 12:31:59.099: INFO: namespace e2e-tests-kubectl-xrkd7 deletion completed in 26.11788047s

• [SLOW TEST:47.710 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
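
The kubectl invocations in the run-rc test above can be replayed directly; a sketch with <ns> as a placeholder namespace (run=<name> is the label the run/v1 generator applies to the pods it creates):

kubectl -n <ns> run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl -n <ns> get rc e2e-test-nginx-rc                 # the ReplicationController itself
kubectl -n <ns> get pods -l run=e2e-test-nginx-rc        # the pod it controls
kubectl -n <ns> logs rc/e2e-test-nginx-rc                # empty for nginx until it serves a request, as seen above
kubectl -n <ns> delete rc e2e-test-nginx-rc

Note the deprecation warning captured in the log: --generator=run/v1 was already slated for removal in this release line.
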
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:31:59.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-b89w8 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-b89w8;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-b89w8 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-b89w8;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-b89w8.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-b89w8.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-b89w8.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-b89w8.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-b89w8.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-b89w8.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-b89w8.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-b89w8.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-b89w8.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 112.125.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.125.112_udp@PTR;check="$$(dig +tcp +noall +answer +search 112.125.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.125.112_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-b89w8 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-b89w8;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-b89w8 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-b89w8;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-b89w8.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-b89w8.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-b89w8.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-b89w8.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-b89w8.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-b89w8.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-b89w8.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-b89w8.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-b89w8.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 112.125.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.125.112_udp@PTR;check="$$(dig +tcp +noall +answer +search 112.125.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.125.112_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 10 12:32:41.018: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f: the server could not find the requested resource (get pods dns-test-595a8653-c2a9-11ea-a406-0242ac11000f)
Jul 10 12:32:41.319: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-b89w8 from pod e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f: the server could not find the requested resource (get pods dns-test-595a8653-c2a9-11ea-a406-0242ac11000f)
Jul 10 12:32:41.346: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f: the server could not find the requested resource (get pods dns-test-595a8653-c2a9-11ea-a406-0242ac11000f)
Jul 10 12:32:41.348: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f: the server could not find the requested resource (get pods dns-test-595a8653-c2a9-11ea-a406-0242ac11000f)
Jul 10 12:32:41.350: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b89w8 from pod e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f: the server could not find the requested resource (get pods dns-test-595a8653-c2a9-11ea-a406-0242ac11000f)
Jul 10 12:32:41.353: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b89w8 from pod e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f: the server could not find the requested resource (get pods dns-test-595a8653-c2a9-11ea-a406-0242ac11000f)
Jul 10 12:32:41.356: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-b89w8.svc from pod e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f: the server could not find the requested resource (get pods dns-test-595a8653-c2a9-11ea-a406-0242ac11000f)
Jul 10 12:32:41.358: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-b89w8.svc from pod e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f: the server could not find the requested resource (get pods dns-test-595a8653-c2a9-11ea-a406-0242ac11000f)
Jul 10 12:32:41.361: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc from pod e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f: the server could not find the requested resource (get pods dns-test-595a8653-c2a9-11ea-a406-0242ac11000f)
Jul 10 12:32:41.364: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc from pod e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f: the server could not find the requested resource (get pods dns-test-595a8653-c2a9-11ea-a406-0242ac11000f)
Jul 10 12:32:41.434: INFO: Lookups using e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f failed for: [wheezy_udp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-b89w8 jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-b89w8 jessie_tcp@dns-test-service.e2e-tests-dns-b89w8 jessie_udp@dns-test-service.e2e-tests-dns-b89w8.svc jessie_tcp@dns-test-service.e2e-tests-dns-b89w8.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc]

Jul 10 12:32:46.531: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc from pod e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f: the server could not find the requested resource (get pods dns-test-595a8653-c2a9-11ea-a406-0242ac11000f)
Jul 10 12:32:46.545: INFO: Lookups using e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f failed for: [jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-b89w8.svc]

Jul 10 12:32:51.521: INFO: DNS probes using e2e-tests-dns-b89w8/dns-test-595a8653-c2a9-11ea-a406-0242ac11000f succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:32:53.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-b89w8" for this suite.
Jul 10 12:32:59.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:32:59.494: INFO: namespace: e2e-tests-dns-b89w8, resource: bindings, ignored listing per whitelist
Jul 10 12:32:59.495: INFO: namespace e2e-tests-dns-b89w8 deletion completed in 6.085433283s

• [SLOW TEST:60.396 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
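
The wheezy/jessie probe loops above boil down to a handful of lookups against the cluster DNS service. A condensed sketch, run from any pod that has dig installed (for example the prober images the test uses); <ns> stands for the test namespace and <service-cluster-ip> for the ClusterIP being reverse-resolved:

dig +short dns-test-service A                                      # relies on the pod's search path
dig +short dns-test-service.<ns>.svc.cluster.local A               # fully qualified service record
dig +short _http._tcp.dns-test-service.<ns>.svc.cluster.local SRV  # SRV record for the named port
dig +short -x <service-cluster-ip>                                 # the PTR lookup at the end of each loop

The test repeats each lookup over UDP and TCP (+notcp / +tcp) and only writes an OK marker once the answer section is non-empty, which is why the early "Unable to read ..." messages above resolve into a success a few iterations later.
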
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:32:59.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-7d6784fc-c2a9-11ea-a406-0242ac11000f
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:33:09.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-m2vx4" for this suite.
Jul 10 12:33:35.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:33:36.401: INFO: namespace: e2e-tests-configmap-m2vx4, resource: bindings, ignored listing per whitelist
Jul 10 12:33:36.406: INFO: namespace e2e-tests-configmap-m2vx4 deletion completed in 26.485347286s

• [SLOW TEST:36.911 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
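
The binary-data test above exercises the ConfigMap binaryData field alongside data. A small sketch of creating such a ConfigMap by hand (names and file paths are placeholders; a file that is not valid UTF-8 is stored under .binaryData, base64-encoded):

kubectl -n <ns> create configmap configmap-test-upd \
  --from-literal=text-key=hello \
  --from-file=binary-key=./some-binary-file
kubectl -n <ns> get configmap configmap-test-upd -o yaml   # text under .data, binary under .binaryData

Mounted as a volume, both keys appear as files in the pod, which is what the "Waiting for pod with text data" / "Waiting for pod with binary data" steps check.
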
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:33:36.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jul 10 12:33:36.781: INFO: Waiting up to 5m0s for pod "var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f" in namespace "e2e-tests-var-expansion-46nh9" to be "success or failure"
Jul 10 12:33:36.869: INFO: Pod "var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 88.197974ms
Jul 10 12:33:39.125: INFO: Pod "var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.344863992s
Jul 10 12:33:41.129: INFO: Pod "var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.348031608s
Jul 10 12:33:43.275: INFO: Pod "var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.494719689s
Jul 10 12:33:45.999: INFO: Pod "var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.218577765s
Jul 10 12:33:48.003: INFO: Pod "var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.222067798s
Jul 10 12:33:50.007: INFO: Pod "var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 13.226167993s
Jul 10 12:33:52.009: INFO: Pod "var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 15.228872871s
Jul 10 12:33:54.391: INFO: Pod "var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.610170296s
STEP: Saw pod success
Jul 10 12:33:54.391: INFO: Pod "var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:33:54.646: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f container dapi-container: 
STEP: delete the pod
Jul 10 12:33:54.733: INFO: Waiting for pod var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f to disappear
Jul 10 12:33:54.819: INFO: Pod var-expansion-936abc86-c2a9-11ea-a406-0242ac11000f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:33:54.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-46nh9" for this suite.
Jul 10 12:34:00.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:34:00.864: INFO: namespace: e2e-tests-var-expansion-46nh9, resource: bindings, ignored listing per whitelist
Jul 10 12:34:00.893: INFO: namespace e2e-tests-var-expansion-46nh9 deletion completed in 6.069922184s

• [SLOW TEST:24.487 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
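
The env-composition behaviour checked above relies on $(VAR) references in later env entries expanding to values defined earlier in the same list. A self-contained sketch (pod name, namespace and values are placeholders, not the test's own):

cat <<'EOF' | kubectl -n <ns> apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO)-and-$(BAR)"    # expanded by the kubelet, not the shell
EOF
kubectl -n <ns> logs var-expansion-demo   # prints FOOBAR=foo-value-and-bar-value once the pod has run
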
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:34:00.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-lhs2b
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lhs2b to expose endpoints map[]
Jul 10 12:34:01.109: INFO: Get endpoints failed (17.234699ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jul 10 12:34:02.111: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lhs2b exposes endpoints map[] (1.019943032s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-lhs2b
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lhs2b to expose endpoints map[pod1:[80]]
Jul 10 12:34:06.836: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.720250138s elapsed, will retry)
Jul 10 12:34:09.934: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lhs2b exposes endpoints map[pod1:[80]] (7.818712332s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-lhs2b
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lhs2b to expose endpoints map[pod1:[80] pod2:[80]]
Jul 10 12:34:14.610: INFO: Unexpected endpoints: found map[a294cc0a-c2a9-11ea-b2c9-0242ac120008:[80]], expected map[pod1:[80] pod2:[80]] (4.672934829s elapsed, will retry)
Jul 10 12:34:18.090: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lhs2b exposes endpoints map[pod2:[80] pod1:[80]] (8.152507667s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-lhs2b
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lhs2b to expose endpoints map[pod2:[80]]
Jul 10 12:34:18.144: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lhs2b exposes endpoints map[pod2:[80]] (50.271944ms elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-lhs2b
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-lhs2b to expose endpoints map[]
Jul 10 12:34:19.610: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-lhs2b exposes endpoints map[] (1.463828712s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:34:21.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-lhs2b" for this suite.
Jul 10 12:34:29.778: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:34:29.845: INFO: namespace: e2e-tests-services-lhs2b, resource: bindings, ignored listing per whitelist
Jul 10 12:34:29.852: INFO: namespace e2e-tests-services-lhs2b deletion completed in 8.480603816s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:28.959 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
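
The endpoint bookkeeping the Services test above validates can be reproduced with a plain ClusterIP service plus labelled pods; a sketch with placeholder names (kubectl create service clusterip sets the selector to app=<name>, so the pod below is labelled to match):

kubectl -n <ns> create service clusterip endpoint-test2 --tcp=80:80
kubectl -n <ns> get endpoints endpoint-test2 -w &    # follow the subsets in the background

kubectl -n <ns> run pod1 --image=docker.io/library/nginx:1.14-alpine --restart=Never --port=80 --labels=app=endpoint-test2
# once pod1 is Ready its IP appears under the endpoints; add pod2 the same way, then:
kubectl -n <ns> delete pod pod1
# the endpoints shrink back to pod2 only, and to empty once pod2 is deleted too,
# mirroring the map[pod1:[80] pod2:[80]] -> map[pod2:[80]] -> map[] transitions above.
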
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:34:29.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-9gmg8 in namespace e2e-tests-proxy-x47pd
I0710 12:34:30.461256       6 runners.go:184] Created replication controller with name: proxy-service-9gmg8, namespace: e2e-tests-proxy-x47pd, replica count: 1
I0710 12:34:31.511713       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:32.511915       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:33.512089       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:34.512328       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:35.512531       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:36.512854       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:37.513056       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:38.513295       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:39.513507       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:40.513714       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:41.513922       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:42.514129       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:43.514345       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:44.514544       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0710 12:34:45.514768       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0710 12:34:46.514989       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0710 12:34:47.515267       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0710 12:34:48.515509       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0710 12:34:49.515906       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0710 12:34:50.516111       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0710 12:34:51.516308       6 runners.go:184] proxy-service-9gmg8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 10 12:34:51.519: INFO: setup took 21.384478535s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jul 10 12:34:51.525: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-x47pd/pods/proxy-service-9gmg8-d98qm:1080/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jul 10 12:35:07.831: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-sxcrj,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxcrj/configmaps/e2e-watch-test-configmap-a,UID:c9bfc2aa-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:25976,Generation:0,CreationTimestamp:2020-07-10 12:35:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 10 12:35:07.831: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-sxcrj,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxcrj/configmaps/e2e-watch-test-configmap-a,UID:c9bfc2aa-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:25976,Generation:0,CreationTimestamp:2020-07-10 12:35:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jul 10 12:35:18.057: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-sxcrj,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxcrj/configmaps/e2e-watch-test-configmap-a,UID:c9bfc2aa-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:25995,Generation:0,CreationTimestamp:2020-07-10 12:35:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul 10 12:35:18.057: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-sxcrj,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxcrj/configmaps/e2e-watch-test-configmap-a,UID:c9bfc2aa-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:25995,Generation:0,CreationTimestamp:2020-07-10 12:35:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jul 10 12:35:28.065: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-sxcrj,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxcrj/configmaps/e2e-watch-test-configmap-a,UID:c9bfc2aa-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:26015,Generation:0,CreationTimestamp:2020-07-10 12:35:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 10 12:35:28.065: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-sxcrj,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxcrj/configmaps/e2e-watch-test-configmap-a,UID:c9bfc2aa-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:26015,Generation:0,CreationTimestamp:2020-07-10 12:35:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jul 10 12:35:39.121: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-sxcrj,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxcrj/configmaps/e2e-watch-test-configmap-a,UID:c9bfc2aa-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:26036,Generation:0,CreationTimestamp:2020-07-10 12:35:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 10 12:35:39.121: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-sxcrj,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxcrj/configmaps/e2e-watch-test-configmap-a,UID:c9bfc2aa-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:26036,Generation:0,CreationTimestamp:2020-07-10 12:35:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jul 10 12:35:49.378: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-sxcrj,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxcrj/configmaps/e2e-watch-test-configmap-b,UID:e25d17ec-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:26053,Generation:0,CreationTimestamp:2020-07-10 12:35:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 10 12:35:49.379: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-sxcrj,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxcrj/configmaps/e2e-watch-test-configmap-b,UID:e25d17ec-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:26053,Generation:0,CreationTimestamp:2020-07-10 12:35:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jul 10 12:35:59.384: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-sxcrj,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxcrj/configmaps/e2e-watch-test-configmap-b,UID:e25d17ec-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:26071,Generation:0,CreationTimestamp:2020-07-10 12:35:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 10 12:35:59.384: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-sxcrj,SelfLink:/api/v1/namespaces/e2e-tests-watch-sxcrj/configmaps/e2e-watch-test-configmap-b,UID:e25d17ec-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:26071,Generation:0,CreationTimestamp:2020-07-10 12:35:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:36:09.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-sxcrj" for this suite.
Jul 10 12:36:15.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:36:15.654: INFO: namespace: e2e-tests-watch-sxcrj, resource: bindings, ignored listing per whitelist
Jul 10 12:36:15.660: INFO: namespace e2e-tests-watch-sxcrj deletion completed in 6.270362659s

• [SLOW TEST:67.991 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
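
The Watchers test above opens three label-filtered watches (A, B, A-or-B) and asserts each one sees exactly the ADDED/MODIFIED/DELETED notifications for its labels. A rough by-hand equivalent (kubectl get -w shows the changing object rather than the raw event types the test asserts on; names and label values are the ones in the log):

# terminal 1: follow configmaps labelled for watcher A
kubectl -n <ns> get configmaps -l watch-this-configmap=multiple-watchers-A -w

# terminal 2: drive the notifications
cat <<'EOF' | kubectl -n <ns> apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"
EOF
kubectl -n <ns> patch configmap e2e-watch-test-configmap-a --type merge -p '{"data":{"mutation":"2"}}'
kubectl -n <ns> delete configmap e2e-watch-test-configmap-a

A watch filtered on watch-this-configmap=multiple-watchers-B should stay silent through all of the above, which is the isolation property the test checks.
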
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:36:15.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 10 12:36:16.429: INFO: Creating deployment "test-recreate-deployment"
Jul 10 12:36:16.478: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jul 10 12:36:16.518: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jul 10 12:36:18.657: INFO: Waiting deployment "test-recreate-deployment" to complete
Jul 10 12:36:18.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 10 12:36:20.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 10 12:36:22.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 10 12:36:24.698: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729981376, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 10 12:36:26.662: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jul 10 12:36:26.670: INFO: Updating deployment test-recreate-deployment
Jul 10 12:36:26.670: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul 10 12:36:29.667: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-frhws,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-frhws/deployments/test-recreate-deployment,UID:f2a3baae-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:26173,Generation:2,CreationTimestamp:2020-07-10 12:36:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-07-10 12:36:26 +0000 UTC 2020-07-10 12:36:26 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-10 12:36:27 +0000 UTC 2020-07-10 12:36:16 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jul 10 12:36:29.671: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-frhws,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-frhws/replicasets/test-recreate-deployment-589c4bfd,UID:f8e42e3d-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:26171,Generation:1,CreationTimestamp:2020-07-10 12:36:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment f2a3baae-c2a9-11ea-b2c9-0242ac120008 0xc00161adff 0xc00161ae10}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 10 12:36:29.671: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jul 10 12:36:29.672: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-frhws,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-frhws/replicasets/test-recreate-deployment-5bf7f65dc,UID:f2b127b1-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:26160,Generation:2,CreationTimestamp:2020-07-10 12:36:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment f2a3baae-c2a9-11ea-b2c9-0242ac120008 0xc00161aed0 0xc00161aed1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 10 12:36:29.711: INFO: Pod "test-recreate-deployment-589c4bfd-h9qd6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-h9qd6,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-frhws,SelfLink:/api/v1/namespaces/e2e-tests-deployment-frhws/pods/test-recreate-deployment-589c4bfd-h9qd6,UID:f8e488c9-c2a9-11ea-b2c9-0242ac120008,ResourceVersion:26170,Generation:0,CreationTimestamp:2020-07-10 12:36:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd f8e42e3d-c2a9-11ea-b2c9-0242ac120008 0xc00161b90f 0xc00161b920}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xc4xd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xc4xd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xc4xd true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00161be30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00161bfa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 12:36:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-10 12:36:27 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 12:36:26 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-10 12:36:27 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:36:29.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-frhws" for this suite.
Jul 10 12:36:42.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:36:42.072: INFO: namespace: e2e-tests-deployment-frhws, resource: bindings, ignored listing per whitelist
Jul 10 12:36:42.088: INFO: namespace e2e-tests-deployment-frhws deletion completed in 12.372826582s

• [SLOW TEST:26.428 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
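
(Not part of the test log.) For readers following the Recreate-deployment test above, a minimal sketch of the kind of Deployment it drives, built with the k8s.io/api Go types. The labels and redis image come from the ReplicaSet dump earlier in this block; the rest of the field values are illustrative, not taken from this run.

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "sample-pod-3"}
	// A Deployment with the Recreate strategy: old pods are torn down before
	// new ones are created, which is what the e2e test asserts when it updates
	// the pod template from the redis image to nginx.
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0", // image seen in the ReplicaSet dump above
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out))
}
```
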
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:36:42.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-0253af7e-c2aa-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume configMaps
Jul 10 12:36:42.794: INFO: Waiting up to 5m0s for pod "pod-configmaps-025963f1-c2aa-11ea-a406-0242ac11000f" in namespace "e2e-tests-configmap-qnm4r" to be "success or failure"
Jul 10 12:36:43.942: INFO: Pod "pod-configmaps-025963f1-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.147980167s
Jul 10 12:36:46.636: INFO: Pod "pod-configmaps-025963f1-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.842388594s
Jul 10 12:36:49.382: INFO: Pod "pod-configmaps-025963f1-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588186185s
Jul 10 12:36:51.386: INFO: Pod "pod-configmaps-025963f1-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.592189091s
Jul 10 12:36:53.390: INFO: Pod "pod-configmaps-025963f1-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.596361623s
Jul 10 12:36:55.397: INFO: Pod "pod-configmaps-025963f1-c2aa-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.602580811s
STEP: Saw pod success
Jul 10 12:36:55.397: INFO: Pod "pod-configmaps-025963f1-c2aa-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:36:55.400: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-025963f1-c2aa-11ea-a406-0242ac11000f container configmap-volume-test: 
STEP: delete the pod
Jul 10 12:36:56.157: INFO: Waiting for pod pod-configmaps-025963f1-c2aa-11ea-a406-0242ac11000f to disappear
Jul 10 12:36:56.667: INFO: Pod pod-configmaps-025963f1-c2aa-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:36:56.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qnm4r" for this suite.
Jul 10 12:37:05.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:37:07.042: INFO: namespace: e2e-tests-configmap-qnm4r, resource: bindings, ignored listing per whitelist
Jul 10 12:37:07.063: INFO: namespace e2e-tests-configmap-qnm4r deletion completed in 10.391721546s

• [SLOW TEST:24.975 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
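
(Not part of the test log.) A minimal sketch of the pod spec exercised by the ConfigMap "volume with mappings" test above: a ConfigMap volume whose Items list remaps a key onto a chosen relative path. The container name matches the log; the ConfigMap name, image, key and paths are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Volume that maps a single ConfigMap key to a chosen file path inside the mount.
	vol := corev1.Volume{Name: "configmap-volume"}
	vol.ConfigMap = &corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
		Items: []corev1.KeyToPath{
			{Key: "data-1", Path: "path/to/data-2"}, // key -> relative path mapping
		},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{vol},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test", // container name seen in the log above
				Image:   "busybox",               // illustrative; the suite uses its own mount-test image
				Command: []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```
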
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:37:07.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 10 12:37:08.096: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/: 
alternatives.log
containers/

[log truncated: the remaining proxy-log attempts returned the same directory listing (alternatives.log, containers/); the tail of this Proxy test and the header of the following [sig-storage] Projected downwardAPI test are missing]
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 10 12:37:18.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17db9a62-c2aa-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-tbnr4" to be "success or failure"
Jul 10 12:37:18.990: INFO: Pod "downwardapi-volume-17db9a62-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 81.231195ms
Jul 10 12:37:21.403: INFO: Pod "downwardapi-volume-17db9a62-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.49431793s
Jul 10 12:37:23.895: INFO: Pod "downwardapi-volume-17db9a62-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.985913285s
Jul 10 12:37:27.814: INFO: Pod "downwardapi-volume-17db9a62-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.904923382s
Jul 10 12:37:30.062: INFO: Pod "downwardapi-volume-17db9a62-c2aa-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 11.152978968s
Jul 10 12:37:32.156: INFO: Pod "downwardapi-volume-17db9a62-c2aa-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.246914429s
STEP: Saw pod success
Jul 10 12:37:32.156: INFO: Pod "downwardapi-volume-17db9a62-c2aa-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:37:32.160: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-17db9a62-c2aa-11ea-a406-0242ac11000f container client-container: 
STEP: delete the pod
Jul 10 12:37:32.496: INFO: Waiting for pod downwardapi-volume-17db9a62-c2aa-11ea-a406-0242ac11000f to disappear
Jul 10 12:37:32.846: INFO: Pod downwardapi-volume-17db9a62-c2aa-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:37:32.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tbnr4" for this suite.
Jul 10 12:37:41.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:37:41.435: INFO: namespace: e2e-tests-projected-tbnr4, resource: bindings, ignored listing per whitelist
Jul 10 12:37:41.456: INFO: namespace e2e-tests-projected-tbnr4 deletion completed in 8.606312813s

• [SLOW TEST:22.723 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
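
(Not part of the test log.) A minimal sketch of the pod the Projected downwardAPI "podname only" test creates: a projected volume whose only source is a downward API item exposing metadata.name. The container name matches the log; image and paths are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Projected volume exposing only the pod's own name via the downward API.
	vol := corev1.Volume{Name: "podinfo"}
	vol.Projected = &corev1.ProjectedVolumeSource{
		Sources: []corev1.VolumeProjection{{
			DownwardAPI: &corev1.DownwardAPIProjection{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		}},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{vol},
			Containers: []corev1.Container{{
				Name:         "client-container", // container name seen in the log above
				Image:        "busybox",          // illustrative
				Command:      []string{"cat", "/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```
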
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:37:41.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-7sgfj
Jul 10 12:37:50.649: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-7sgfj
STEP: checking the pod's current state and verifying that restartCount is present
Jul 10 12:37:50.653: INFO: Initial restart count of pod liveness-exec is 0
Jul 10 12:38:38.004: INFO: Restart count of pod e2e-tests-container-probe-7sgfj/liveness-exec is now 1 (47.351770741s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:38:38.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-7sgfj" for this suite.
Jul 10 12:38:44.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:38:44.157: INFO: namespace: e2e-tests-container-probe-7sgfj, resource: bindings, ignored listing per whitelist
Jul 10 12:38:44.157: INFO: namespace e2e-tests-container-probe-7sgfj deletion completed in 6.109723446s

• [SLOW TEST:62.701 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
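
(Not part of the test log.) A minimal sketch of the liveness-exec pod above: an exec probe running `cat /tmp/health`, with a container command that removes the file after a while so the probe fails and the kubelet restarts the container — the restart-count increment the test waits for. Image, timings and thresholds are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Liveness probe that execs `cat /tmp/health` inside the container.
	probe := corev1.Probe{
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox", // illustrative
				// Create the health file, keep it for a while, then delete it to trigger a restart.
				Command:       []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &probe,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```
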
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:38:44.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-4acee780-c2aa-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume secrets
Jul 10 12:38:44.370: INFO: Waiting up to 5m0s for pod "pod-secrets-4acf6894-c2aa-11ea-a406-0242ac11000f" in namespace "e2e-tests-secrets-dh4xv" to be "success or failure"
Jul 10 12:38:44.413: INFO: Pod "pod-secrets-4acf6894-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 42.850615ms
Jul 10 12:38:46.417: INFO: Pod "pod-secrets-4acf6894-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047160822s
Jul 10 12:38:48.421: INFO: Pod "pod-secrets-4acf6894-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051244702s
Jul 10 12:38:50.501: INFO: Pod "pod-secrets-4acf6894-c2aa-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130625254s
STEP: Saw pod success
Jul 10 12:38:50.501: INFO: Pod "pod-secrets-4acf6894-c2aa-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:38:50.504: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-4acf6894-c2aa-11ea-a406-0242ac11000f container secret-volume-test: 
STEP: delete the pod
Jul 10 12:38:50.556: INFO: Waiting for pod pod-secrets-4acf6894-c2aa-11ea-a406-0242ac11000f to disappear
Jul 10 12:38:50.584: INFO: Pod pod-secrets-4acf6894-c2aa-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:38:50.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dh4xv" for this suite.
Jul 10 12:38:56.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:38:56.646: INFO: namespace: e2e-tests-secrets-dh4xv, resource: bindings, ignored listing per whitelist
Jul 10 12:38:56.730: INFO: namespace e2e-tests-secrets-dh4xv deletion completed in 6.143288592s

• [SLOW TEST:12.572 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
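
(Not part of the test log.) A minimal sketch of the Secrets "multiple volumes" pod above: the same Secret mounted twice under different paths in one pod. The container name matches the log; the Secret name, image and mount paths are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two volumes backed by the same Secret.
	newSecretVol := func(name string) corev1.Volume {
		v := corev1.Volume{Name: name}
		v.Secret = &corev1.SecretVolumeSource{SecretName: "secret-test"}
		return v
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{newSecretVol("secret-volume-1"), newSecretVol("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test", // container name seen in the log above
				Image:   "busybox",            // illustrative
				Command: []string{"ls", "/etc/secret-volume-1", "/etc/secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```
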
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:38:56.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-stbw
STEP: Creating a pod to test atomic-volume-subpath
Jul 10 12:38:57.665: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-stbw" in namespace "e2e-tests-subpath-sqz2n" to be "success or failure"
Jul 10 12:38:57.680: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Pending", Reason="", readiness=false. Elapsed: 15.105431ms
Jul 10 12:38:59.684: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018413463s
Jul 10 12:39:01.775: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109849425s
Jul 10 12:39:03.778: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112918147s
Jul 10 12:39:05.785: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119761004s
Jul 10 12:39:07.788: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.122763477s
Jul 10 12:39:09.807: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Running", Reason="", readiness=false. Elapsed: 12.141667822s
Jul 10 12:39:11.810: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Running", Reason="", readiness=false. Elapsed: 14.144697877s
Jul 10 12:39:13.814: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Running", Reason="", readiness=false. Elapsed: 16.148946354s
Jul 10 12:39:15.817: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Running", Reason="", readiness=false. Elapsed: 18.152207638s
Jul 10 12:39:17.843: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Running", Reason="", readiness=false. Elapsed: 20.177668973s
Jul 10 12:39:19.846: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Running", Reason="", readiness=false. Elapsed: 22.180474818s
Jul 10 12:39:22.033: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Running", Reason="", readiness=false. Elapsed: 24.368161488s
Jul 10 12:39:24.037: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Running", Reason="", readiness=false. Elapsed: 26.372152225s
Jul 10 12:39:26.440: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Running", Reason="", readiness=false. Elapsed: 28.775216854s
Jul 10 12:39:28.444: INFO: Pod "pod-subpath-test-configmap-stbw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.779047209s
STEP: Saw pod success
Jul 10 12:39:28.444: INFO: Pod "pod-subpath-test-configmap-stbw" satisfied condition "success or failure"
Jul 10 12:39:28.447: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-stbw container test-container-subpath-configmap-stbw: 
STEP: delete the pod
Jul 10 12:39:28.633: INFO: Waiting for pod pod-subpath-test-configmap-stbw to disappear
Jul 10 12:39:28.693: INFO: Pod pod-subpath-test-configmap-stbw no longer exists
STEP: Deleting pod pod-subpath-test-configmap-stbw
Jul 10 12:39:28.693: INFO: Deleting pod "pod-subpath-test-configmap-stbw" in namespace "e2e-tests-subpath-sqz2n"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:39:28.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-sqz2n" for this suite.
Jul 10 12:39:36.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:39:36.952: INFO: namespace: e2e-tests-subpath-sqz2n, resource: bindings, ignored listing per whitelist
Jul 10 12:39:36.958: INFO: namespace e2e-tests-subpath-sqz2n deletion completed in 8.260355495s

• [SLOW TEST:40.228 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
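
(Not part of the test log.) A minimal sketch of the subpath pod above: a ConfigMap volume mounted with SubPath directly over a file path that already exists in the image, which is the "mountPath of existing file" case. ConfigMap name, key, image and the /etc/resolv.conf target are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	vol := corev1.Volume{Name: "configmap-data"}
	vol.ConfigMap = &corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // illustrative name
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{vol},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-configmap", // matches the container name pattern in the log
				Image:   "busybox",                          // illustrative
				Command: []string{"cat", "/etc/resolv.conf"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "configmap-data",
					// SubPath mounts a single item of the volume over a path
					// that already exists in the image (here /etc/resolv.conf).
					MountPath: "/etc/resolv.conf",
					SubPath:   "configmap-key",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```
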
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:39:36.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0710 12:40:07.680909       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 10 12:40:07.680: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:40:07.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-sbt9r" for this suite.
Jul 10 12:40:13.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:40:13.823: INFO: namespace: e2e-tests-gc-sbt9r, resource: bindings, ignored listing per whitelist
Jul 10 12:40:13.845: INFO: namespace e2e-tests-gc-sbt9r deletion completed in 6.161784842s

• [SLOW TEST:36.886 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
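
(Not part of the test log.) The garbage-collector test above deletes the Deployment with PropagationPolicy=Orphan, so its ReplicaSet is left behind. A minimal sketch of the delete options involved, using the metav1 types; the options would be passed to the apps/v1 deployments client's Delete call.

```go
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// DeleteOptions with PropagationPolicy=Orphan: the owning Deployment is
	// removed but its ReplicaSet (and pods) are orphaned rather than deleted,
	// which is the behaviour the GC test verifies.
	orphan := metav1.DeletePropagationOrphan
	opts := metav1.DeleteOptions{PropagationPolicy: &orphan}

	out, _ := json.MarshalIndent(opts, "", "  ")
	fmt.Println(string(out))
}
```
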
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:40:13.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-807c380f-c2aa-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume configMaps
Jul 10 12:40:14.555: INFO: Waiting up to 5m0s for pod "pod-configmaps-8084a0f6-c2aa-11ea-a406-0242ac11000f" in namespace "e2e-tests-configmap-jcnqb" to be "success or failure"
Jul 10 12:40:14.557: INFO: Pod "pod-configmaps-8084a0f6-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135308ms
Jul 10 12:40:16.627: INFO: Pod "pod-configmaps-8084a0f6-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072725443s
Jul 10 12:40:18.711: INFO: Pod "pod-configmaps-8084a0f6-c2aa-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 4.156352413s
Jul 10 12:40:20.843: INFO: Pod "pod-configmaps-8084a0f6-c2aa-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.288743261s
STEP: Saw pod success
Jul 10 12:40:20.843: INFO: Pod "pod-configmaps-8084a0f6-c2aa-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:40:20.851: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-8084a0f6-c2aa-11ea-a406-0242ac11000f container configmap-volume-test: 
STEP: delete the pod
Jul 10 12:40:20.865: INFO: Waiting for pod pod-configmaps-8084a0f6-c2aa-11ea-a406-0242ac11000f to disappear
Jul 10 12:40:20.869: INFO: Pod pod-configmaps-8084a0f6-c2aa-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:40:20.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-jcnqb" for this suite.
Jul 10 12:40:26.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:40:26.934: INFO: namespace: e2e-tests-configmap-jcnqb, resource: bindings, ignored listing per whitelist
Jul 10 12:40:26.993: INFO: namespace e2e-tests-configmap-jcnqb deletion completed in 6.12037006s

• [SLOW TEST:13.148 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
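
(Not part of the test log.) The "mappings and Item mode set" variant above differs from the earlier ConfigMap mapping test only in that each mapped item also carries an explicit file mode. A minimal sketch of that volume source; ConfigMap name, key, path and the 0400 mode are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// ConfigMap volume source mapping one key to a path and forcing the file
	// mode of that item; the test then checks the resulting mode on disk.
	src := corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
		Items: []corev1.KeyToPath{{
			Key:  "data-1",
			Path: "path/to/data-2",
			Mode: int32Ptr(0400),
		}},
	}
	out, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(out))
}
```
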
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:40:26.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jul 10 12:40:27.121: INFO: Waiting up to 5m0s for pod "client-containers-880ce635-c2aa-11ea-a406-0242ac11000f" in namespace "e2e-tests-containers-4x4bs" to be "success or failure"
Jul 10 12:40:27.154: INFO: Pod "client-containers-880ce635-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 32.760877ms
Jul 10 12:40:29.286: INFO: Pod "client-containers-880ce635-c2aa-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164675207s
Jul 10 12:40:31.290: INFO: Pod "client-containers-880ce635-c2aa-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.168704592s
STEP: Saw pod success
Jul 10 12:40:31.290: INFO: Pod "client-containers-880ce635-c2aa-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:40:31.293: INFO: Trying to get logs from node hunter-worker pod client-containers-880ce635-c2aa-11ea-a406-0242ac11000f container test-container: 
STEP: delete the pod
Jul 10 12:40:31.361: INFO: Waiting for pod client-containers-880ce635-c2aa-11ea-a406-0242ac11000f to disappear
Jul 10 12:40:31.489: INFO: Pod client-containers-880ce635-c2aa-11ea-a406-0242ac11000f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:40:31.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-4x4bs" for this suite.
Jul 10 12:40:37.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:40:37.631: INFO: namespace: e2e-tests-containers-4x4bs, resource: bindings, ignored listing per whitelist
Jul 10 12:40:37.637: INFO: namespace e2e-tests-containers-4x4bs deletion completed in 6.143910924s

• [SLOW TEST:10.644 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
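
(Not part of the test log.) A minimal sketch of the "override the image's default arguments" pod above: setting Args on the container replaces the image's CMD while its ENTRYPOINT is kept. The container name matches the log; the image and argument values are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container", // container name seen in the log above
				Image: "busybox",        // illustrative; the suite uses its own entrypoint-test image
				// Args replaces the image's default CMD (docker cmd) without
				// touching its ENTRYPOINT.
				Args: []string{"override", "arguments"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```
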
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:40:37.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-hsn25
Jul 10 12:40:41.842: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-hsn25
STEP: checking the pod's current state and verifying that restartCount is present
Jul 10 12:40:41.844: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:44:42.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-hsn25" for this suite.
Jul 10 12:44:48.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:44:48.522: INFO: namespace: e2e-tests-container-probe-hsn25, resource: bindings, ignored listing per whitelist
Jul 10 12:44:48.522: INFO: namespace e2e-tests-container-probe-hsn25 deletion completed in 6.229181489s

• [SLOW TEST:250.885 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:44:48.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-76wb5
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 10 12:44:48.591: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 10 12:45:26.965: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.180:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-76wb5 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:45:26.965: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:45:26.996291       6 log.go:172] (0xc000a68e70) (0xc002404a00) Create stream
I0710 12:45:26.996324       6 log.go:172] (0xc000a68e70) (0xc002404a00) Stream added, broadcasting: 1
I0710 12:45:26.999317       6 log.go:172] (0xc000a68e70) Reply frame received for 1
I0710 12:45:26.999414       6 log.go:172] (0xc000a68e70) (0xc001de05a0) Create stream
I0710 12:45:26.999450       6 log.go:172] (0xc000a68e70) (0xc001de05a0) Stream added, broadcasting: 3
I0710 12:45:27.000719       6 log.go:172] (0xc000a68e70) Reply frame received for 3
I0710 12:45:27.000860       6 log.go:172] (0xc000a68e70) (0xc00228a000) Create stream
I0710 12:45:27.000878       6 log.go:172] (0xc000a68e70) (0xc00228a000) Stream added, broadcasting: 5
I0710 12:45:27.001890       6 log.go:172] (0xc000a68e70) Reply frame received for 5
I0710 12:45:27.059478       6 log.go:172] (0xc000a68e70) Data frame received for 3
I0710 12:45:27.059508       6 log.go:172] (0xc001de05a0) (3) Data frame handling
I0710 12:45:27.059521       6 log.go:172] (0xc001de05a0) (3) Data frame sent
I0710 12:45:27.059527       6 log.go:172] (0xc000a68e70) Data frame received for 3
I0710 12:45:27.059532       6 log.go:172] (0xc001de05a0) (3) Data frame handling
I0710 12:45:27.059618       6 log.go:172] (0xc000a68e70) Data frame received for 5
I0710 12:45:27.059631       6 log.go:172] (0xc00228a000) (5) Data frame handling
I0710 12:45:27.061123       6 log.go:172] (0xc000a68e70) Data frame received for 1
I0710 12:45:27.061142       6 log.go:172] (0xc002404a00) (1) Data frame handling
I0710 12:45:27.061157       6 log.go:172] (0xc002404a00) (1) Data frame sent
I0710 12:45:27.061172       6 log.go:172] (0xc000a68e70) (0xc002404a00) Stream removed, broadcasting: 1
I0710 12:45:27.061230       6 log.go:172] (0xc000a68e70) (0xc002404a00) Stream removed, broadcasting: 1
I0710 12:45:27.061248       6 log.go:172] (0xc000a68e70) (0xc001de05a0) Stream removed, broadcasting: 3
I0710 12:45:27.061261       6 log.go:172] (0xc000a68e70) (0xc00228a000) Stream removed, broadcasting: 5
Jul 10 12:45:27.061: INFO: Found all expected endpoints: [netserver-0]
I0710 12:45:27.061456       6 log.go:172] (0xc000a68e70) Go away received
Jul 10 12:45:27.063: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.171:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-76wb5 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:45:27.063: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:45:27.097665       6 log.go:172] (0xc000a69340) (0xc002404c80) Create stream
I0710 12:45:27.097694       6 log.go:172] (0xc000a69340) (0xc002404c80) Stream added, broadcasting: 1
I0710 12:45:27.099682       6 log.go:172] (0xc000a69340) Reply frame received for 1
I0710 12:45:27.099737       6 log.go:172] (0xc000a69340) (0xc002404d20) Create stream
I0710 12:45:27.099770       6 log.go:172] (0xc000a69340) (0xc002404d20) Stream added, broadcasting: 3
I0710 12:45:27.100548       6 log.go:172] (0xc000a69340) Reply frame received for 3
I0710 12:45:27.100583       6 log.go:172] (0xc000a69340) (0xc0017a80a0) Create stream
I0710 12:45:27.100596       6 log.go:172] (0xc000a69340) (0xc0017a80a0) Stream added, broadcasting: 5
I0710 12:45:27.101486       6 log.go:172] (0xc000a69340) Reply frame received for 5
I0710 12:45:27.153096       6 log.go:172] (0xc000a69340) Data frame received for 5
I0710 12:45:27.153141       6 log.go:172] (0xc0017a80a0) (5) Data frame handling
I0710 12:45:27.153182       6 log.go:172] (0xc000a69340) Data frame received for 3
I0710 12:45:27.153192       6 log.go:172] (0xc002404d20) (3) Data frame handling
I0710 12:45:27.153203       6 log.go:172] (0xc002404d20) (3) Data frame sent
I0710 12:45:27.153212       6 log.go:172] (0xc000a69340) Data frame received for 3
I0710 12:45:27.153225       6 log.go:172] (0xc002404d20) (3) Data frame handling
I0710 12:45:27.154966       6 log.go:172] (0xc000a69340) Data frame received for 1
I0710 12:45:27.154991       6 log.go:172] (0xc002404c80) (1) Data frame handling
I0710 12:45:27.155003       6 log.go:172] (0xc002404c80) (1) Data frame sent
I0710 12:45:27.155017       6 log.go:172] (0xc000a69340) (0xc002404c80) Stream removed, broadcasting: 1
I0710 12:45:27.155030       6 log.go:172] (0xc000a69340) Go away received
I0710 12:45:27.155168       6 log.go:172] (0xc000a69340) (0xc002404c80) Stream removed, broadcasting: 1
I0710 12:45:27.155206       6 log.go:172] (0xc000a69340) (0xc002404d20) Stream removed, broadcasting: 3
I0710 12:45:27.155246       6 log.go:172] (0xc000a69340) (0xc0017a80a0) Stream removed, broadcasting: 5
Jul 10 12:45:27.155: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:45:27.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-76wb5" for this suite.
Jul 10 12:45:53.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:45:53.212: INFO: namespace: e2e-tests-pod-network-test-76wb5, resource: bindings, ignored listing per whitelist
Jul 10 12:45:53.242: INFO: namespace e2e-tests-pod-network-test-76wb5 deletion completed in 26.083881093s

• [SLOW TEST:64.720 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:45:53.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:45:54.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vnlhx" for this suite.
Jul 10 12:46:18.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:46:18.306: INFO: namespace: e2e-tests-pods-vnlhx, resource: bindings, ignored listing per whitelist
Jul 10 12:46:18.365: INFO: namespace e2e-tests-pods-vnlhx deletion completed in 24.157019638s

• [SLOW TEST:25.122 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
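
(Not part of the test log.) The "Pods Set QOS Class" test above creates a pod and reads back status.qosClass. A minimal sketch of a pod that lands in the Guaranteed class, because requests equal limits for every container; container name, image and quantities are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Identical requests and limits for every container -> Guaranteed QOS class.
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-qos-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "qos-container", // illustrative
				Image:     "nginx",         // illustrative
				Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```
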
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:46:18.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 10 12:46:23.026: INFO: Successfully updated pod "labelsupdate5979829b-c2ab-11ea-a406-0242ac11000f"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:46:27.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sdkgq" for this suite.
Jul 10 12:46:49.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:46:49.115: INFO: namespace: e2e-tests-downward-api-sdkgq, resource: bindings, ignored listing per whitelist
Jul 10 12:46:49.174: INFO: namespace e2e-tests-downward-api-sdkgq deletion completed in 22.097982672s

• [SLOW TEST:30.810 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:46:49.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-k2tmg
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 10 12:46:49.278: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 10 12:47:15.443: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostName&protocol=udp&host=10.244.2.183&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-k2tmg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:47:15.443: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:47:15.472424       6 log.go:172] (0xc000cc6580) (0xc0025463c0) Create stream
I0710 12:47:15.472460       6 log.go:172] (0xc000cc6580) (0xc0025463c0) Stream added, broadcasting: 1
I0710 12:47:15.474079       6 log.go:172] (0xc000cc6580) Reply frame received for 1
I0710 12:47:15.474124       6 log.go:172] (0xc000cc6580) (0xc00130a0a0) Create stream
I0710 12:47:15.474144       6 log.go:172] (0xc000cc6580) (0xc00130a0a0) Stream added, broadcasting: 3
I0710 12:47:15.474892       6 log.go:172] (0xc000cc6580) Reply frame received for 3
I0710 12:47:15.474964       6 log.go:172] (0xc000cc6580) (0xc002546500) Create stream
I0710 12:47:15.474981       6 log.go:172] (0xc000cc6580) (0xc002546500) Stream added, broadcasting: 5
I0710 12:47:15.475845       6 log.go:172] (0xc000cc6580) Reply frame received for 5
I0710 12:47:15.555868       6 log.go:172] (0xc000cc6580) Data frame received for 3
I0710 12:47:15.555904       6 log.go:172] (0xc00130a0a0) (3) Data frame handling
I0710 12:47:15.555935       6 log.go:172] (0xc00130a0a0) (3) Data frame sent
I0710 12:47:15.556264       6 log.go:172] (0xc000cc6580) Data frame received for 5
I0710 12:47:15.556295       6 log.go:172] (0xc002546500) (5) Data frame handling
I0710 12:47:15.556317       6 log.go:172] (0xc000cc6580) Data frame received for 3
I0710 12:47:15.556327       6 log.go:172] (0xc00130a0a0) (3) Data frame handling
I0710 12:47:15.557949       6 log.go:172] (0xc000cc6580) Data frame received for 1
I0710 12:47:15.557967       6 log.go:172] (0xc0025463c0) (1) Data frame handling
I0710 12:47:15.557975       6 log.go:172] (0xc0025463c0) (1) Data frame sent
I0710 12:47:15.557984       6 log.go:172] (0xc000cc6580) (0xc0025463c0) Stream removed, broadcasting: 1
I0710 12:47:15.558061       6 log.go:172] (0xc000cc6580) (0xc0025463c0) Stream removed, broadcasting: 1
I0710 12:47:15.558071       6 log.go:172] (0xc000cc6580) (0xc00130a0a0) Stream removed, broadcasting: 3
I0710 12:47:15.558081       6 log.go:172] (0xc000cc6580) (0xc002546500) Stream removed, broadcasting: 5
I0710 12:47:15.558097       6 log.go:172] (0xc000cc6580) Go away received
Jul 10 12:47:15.558: INFO: Waiting for endpoints: map[]
Jul 10 12:47:15.561: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.184:8080/dial?request=hostName&protocol=udp&host=10.244.1.173&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-k2tmg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:47:15.561: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:47:15.595986       6 log.go:172] (0xc001430370) (0xc00130a320) Create stream
I0710 12:47:15.596005       6 log.go:172] (0xc001430370) (0xc00130a320) Stream added, broadcasting: 1
I0710 12:47:15.598098       6 log.go:172] (0xc001430370) Reply frame received for 1
I0710 12:47:15.598144       6 log.go:172] (0xc001430370) (0xc000a86000) Create stream
I0710 12:47:15.598166       6 log.go:172] (0xc001430370) (0xc000a86000) Stream added, broadcasting: 3
I0710 12:47:15.599090       6 log.go:172] (0xc001430370) Reply frame received for 3
I0710 12:47:15.599140       6 log.go:172] (0xc001430370) (0xc0029ce140) Create stream
I0710 12:47:15.599155       6 log.go:172] (0xc001430370) (0xc0029ce140) Stream added, broadcasting: 5
I0710 12:47:15.599980       6 log.go:172] (0xc001430370) Reply frame received for 5
I0710 12:47:15.653383       6 log.go:172] (0xc001430370) Data frame received for 3
I0710 12:47:15.653406       6 log.go:172] (0xc000a86000) (3) Data frame handling
I0710 12:47:15.653423       6 log.go:172] (0xc000a86000) (3) Data frame sent
I0710 12:47:15.653711       6 log.go:172] (0xc001430370) Data frame received for 3
I0710 12:47:15.653729       6 log.go:172] (0xc000a86000) (3) Data frame handling
I0710 12:47:15.653854       6 log.go:172] (0xc001430370) Data frame received for 5
I0710 12:47:15.653876       6 log.go:172] (0xc0029ce140) (5) Data frame handling
I0710 12:47:15.655292       6 log.go:172] (0xc001430370) Data frame received for 1
I0710 12:47:15.655326       6 log.go:172] (0xc00130a320) (1) Data frame handling
I0710 12:47:15.655357       6 log.go:172] (0xc00130a320) (1) Data frame sent
I0710 12:47:15.655434       6 log.go:172] (0xc001430370) (0xc00130a320) Stream removed, broadcasting: 1
I0710 12:47:15.655521       6 log.go:172] (0xc001430370) (0xc00130a320) Stream removed, broadcasting: 1
I0710 12:47:15.655539       6 log.go:172] (0xc001430370) (0xc000a86000) Stream removed, broadcasting: 3
I0710 12:47:15.655552       6 log.go:172] (0xc001430370) (0xc0029ce140) Stream removed, broadcasting: 5
I0710 12:47:15.655564       6 log.go:172] (0xc001430370) Go away received
Jul 10 12:47:15.655: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:47:15.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-k2tmg" for this suite.
Jul 10 12:47:39.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:47:39.725: INFO: namespace: e2e-tests-pod-network-test-k2tmg, resource: bindings, ignored listing per whitelist
Jul 10 12:47:39.739: INFO: namespace e2e-tests-pod-network-test-k2tmg deletion completed in 24.080936037s

• [SLOW TEST:50.564 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:47:39.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 10 12:47:39.843: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:47:40.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-r8nvc" for this suite.
Jul 10 12:47:47.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:47:47.336: INFO: namespace: e2e-tests-custom-resource-definition-r8nvc, resource: bindings, ignored listing per whitelist
Jul 10 12:47:47.599: INFO: namespace e2e-tests-custom-resource-definition-r8nvc deletion completed in 6.699579599s

• [SLOW TEST:7.859 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:47:47.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:47:54.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-x9xk5" for this suite.
Jul 10 12:48:16.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:48:16.951: INFO: namespace: e2e-tests-replication-controller-x9xk5, resource: bindings, ignored listing per whitelist
Jul 10 12:48:17.014: INFO: namespace e2e-tests-replication-controller-x9xk5 deletion completed in 22.147381425s

• [SLOW TEST:29.415 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
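
(Not part of the test log.) A minimal sketch of the adoption scenario above: a bare pod labeled name=pod-adoption already exists, and a ReplicationController whose selector matches that label adopts it instead of creating a new pod. The pod-adoption label comes from the log's STEP lines; the image and replica count are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "pod-adoption"} // matches the pre-existing bare pod's label
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			// Selector matches the orphan pod, so the controller adopts it.
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "pod-adoption", Image: "nginx"}}, // image illustrative
				},
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}
```
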
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:48:17.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul 10 12:48:17.119: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fg2r9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fg2r9/configmaps/e2e-watch-test-label-changed,UID:a0307632-c2ab-11ea-b2c9-0242ac120008,ResourceVersion:28050,Generation:0,CreationTimestamp:2020-07-10 12:48:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 10 12:48:17.119: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fg2r9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fg2r9/configmaps/e2e-watch-test-label-changed,UID:a0307632-c2ab-11ea-b2c9-0242ac120008,ResourceVersion:28051,Generation:0,CreationTimestamp:2020-07-10 12:48:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul 10 12:48:17.119: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fg2r9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fg2r9/configmaps/e2e-watch-test-label-changed,UID:a0307632-c2ab-11ea-b2c9-0242ac120008,ResourceVersion:28052,Generation:0,CreationTimestamp:2020-07-10 12:48:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul 10 12:48:27.162: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fg2r9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fg2r9/configmaps/e2e-watch-test-label-changed,UID:a0307632-c2ab-11ea-b2c9-0242ac120008,ResourceVersion:28073,Generation:0,CreationTimestamp:2020-07-10 12:48:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 10 12:48:27.162: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fg2r9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fg2r9/configmaps/e2e-watch-test-label-changed,UID:a0307632-c2ab-11ea-b2c9-0242ac120008,ResourceVersion:28074,Generation:0,CreationTimestamp:2020-07-10 12:48:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jul 10 12:48:27.162: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-fg2r9,SelfLink:/api/v1/namespaces/e2e-tests-watch-fg2r9/configmaps/e2e-watch-test-label-changed,UID:a0307632-c2ab-11ea-b2c9-0242ac120008,ResourceVersion:28075,Generation:0,CreationTimestamp:2020-07-10 12:48:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:48:27.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-fg2r9" for this suite.
Jul 10 12:48:33.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:48:33.220: INFO: namespace: e2e-tests-watch-fg2r9, resource: bindings, ignored listing per whitelist
Jul 10 12:48:33.249: INFO: namespace e2e-tests-watch-fg2r9 deletion completed in 6.08461566s

• [SLOW TEST:16.235 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
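
The ADDED/MODIFIED/DELETED sequence logged above is what an API watch filtered by label delivers as the configmap's label is changed away from, and then back to, the watched value. A rough CLI equivalent is sketched below; the configmap name and label key/value come from the log, while the temporary label value is illustrative.

kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch &

kubectl create configmap e2e-watch-test-label-changed
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored
# changing the label away from the watched value surfaces as DELETED on the filtered watch
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=temporarily-unwatched --overwrite
# restoring the label surfaces as ADDED again; deleting the object then surfaces as DELETED
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored --overwrite
kubectl delete configmap e2e-watch-test-label-changed
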
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:48:33.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-b2sll
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 10 12:48:33.371: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 10 12:48:57.454: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.186 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-b2sll PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:48:57.454: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:48:57.483957       6 log.go:172] (0xc000e7a420) (0xc001c8a6e0) Create stream
I0710 12:48:57.483991       6 log.go:172] (0xc000e7a420) (0xc001c8a6e0) Stream added, broadcasting: 1
I0710 12:48:57.487485       6 log.go:172] (0xc000e7a420) Reply frame received for 1
I0710 12:48:57.487524       6 log.go:172] (0xc000e7a420) (0xc001c8a780) Create stream
I0710 12:48:57.487538       6 log.go:172] (0xc000e7a420) (0xc001c8a780) Stream added, broadcasting: 3
I0710 12:48:57.488419       6 log.go:172] (0xc000e7a420) Reply frame received for 3
I0710 12:48:57.488480       6 log.go:172] (0xc000e7a420) (0xc001ef7e00) Create stream
I0710 12:48:57.488493       6 log.go:172] (0xc000e7a420) (0xc001ef7e00) Stream added, broadcasting: 5
I0710 12:48:57.489457       6 log.go:172] (0xc000e7a420) Reply frame received for 5
I0710 12:48:58.565879       6 log.go:172] (0xc000e7a420) Data frame received for 3
I0710 12:48:58.565952       6 log.go:172] (0xc001c8a780) (3) Data frame handling
I0710 12:48:58.565975       6 log.go:172] (0xc001c8a780) (3) Data frame sent
I0710 12:48:58.565984       6 log.go:172] (0xc000e7a420) Data frame received for 3
I0710 12:48:58.565993       6 log.go:172] (0xc001c8a780) (3) Data frame handling
I0710 12:48:58.566006       6 log.go:172] (0xc000e7a420) Data frame received for 5
I0710 12:48:58.566021       6 log.go:172] (0xc001ef7e00) (5) Data frame handling
I0710 12:48:58.567778       6 log.go:172] (0xc000e7a420) Data frame received for 1
I0710 12:48:58.567797       6 log.go:172] (0xc001c8a6e0) (1) Data frame handling
I0710 12:48:58.567818       6 log.go:172] (0xc001c8a6e0) (1) Data frame sent
I0710 12:48:58.567830       6 log.go:172] (0xc000e7a420) (0xc001c8a6e0) Stream removed, broadcasting: 1
I0710 12:48:58.567903       6 log.go:172] (0xc000e7a420) (0xc001c8a6e0) Stream removed, broadcasting: 1
I0710 12:48:58.567915       6 log.go:172] (0xc000e7a420) (0xc001c8a780) Stream removed, broadcasting: 3
I0710 12:48:58.568052       6 log.go:172] (0xc000e7a420) Go away received
I0710 12:48:58.568231       6 log.go:172] (0xc000e7a420) (0xc001ef7e00) Stream removed, broadcasting: 5
Jul 10 12:48:58.568: INFO: Found all expected endpoints: [netserver-0]
Jul 10 12:48:58.582: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.174 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-b2sll PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 10 12:48:58.582: INFO: >>> kubeConfig: /root/.kube/config
I0710 12:48:58.610066       6 log.go:172] (0xc000e7a8f0) (0xc001c8abe0) Create stream
I0710 12:48:58.610102       6 log.go:172] (0xc000e7a8f0) (0xc001c8abe0) Stream added, broadcasting: 1
I0710 12:48:58.618904       6 log.go:172] (0xc000e7a8f0) Reply frame received for 1
I0710 12:48:58.618944       6 log.go:172] (0xc000e7a8f0) (0xc001ef6000) Create stream
I0710 12:48:58.618956       6 log.go:172] (0xc000e7a8f0) (0xc001ef6000) Stream added, broadcasting: 3
I0710 12:48:58.619635       6 log.go:172] (0xc000e7a8f0) Reply frame received for 3
I0710 12:48:58.619658       6 log.go:172] (0xc000e7a8f0) (0xc001de00a0) Create stream
I0710 12:48:58.619666       6 log.go:172] (0xc000e7a8f0) (0xc001de00a0) Stream added, broadcasting: 5
I0710 12:48:58.620270       6 log.go:172] (0xc000e7a8f0) Reply frame received for 5
I0710 12:48:59.686662       6 log.go:172] (0xc000e7a8f0) Data frame received for 3
I0710 12:48:59.686702       6 log.go:172] (0xc001ef6000) (3) Data frame handling
I0710 12:48:59.686718       6 log.go:172] (0xc001ef6000) (3) Data frame sent
I0710 12:48:59.686779       6 log.go:172] (0xc000e7a8f0) Data frame received for 5
I0710 12:48:59.686899       6 log.go:172] (0xc001de00a0) (5) Data frame handling
I0710 12:48:59.686966       6 log.go:172] (0xc000e7a8f0) Data frame received for 3
I0710 12:48:59.686997       6 log.go:172] (0xc001ef6000) (3) Data frame handling
I0710 12:48:59.689096       6 log.go:172] (0xc000e7a8f0) Data frame received for 1
I0710 12:48:59.689138       6 log.go:172] (0xc001c8abe0) (1) Data frame handling
I0710 12:48:59.689171       6 log.go:172] (0xc001c8abe0) (1) Data frame sent
I0710 12:48:59.689195       6 log.go:172] (0xc000e7a8f0) (0xc001c8abe0) Stream removed, broadcasting: 1
I0710 12:48:59.689226       6 log.go:172] (0xc000e7a8f0) Go away received
I0710 12:48:59.689385       6 log.go:172] (0xc000e7a8f0) (0xc001c8abe0) Stream removed, broadcasting: 1
I0710 12:48:59.689422       6 log.go:172] (0xc000e7a8f0) (0xc001ef6000) Stream removed, broadcasting: 3
I0710 12:48:59.689442       6 log.go:172] (0xc000e7a8f0) (0xc001de00a0) Stream removed, broadcasting: 5
Jul 10 12:48:59.689: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:48:59.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-b2sll" for this suite.
Jul 10 12:49:24.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:49:24.467: INFO: namespace: e2e-tests-pod-network-test-b2sll, resource: bindings, ignored listing per whitelist
Jul 10 12:49:24.487: INFO: namespace e2e-tests-pod-network-test-b2sll deletion completed in 24.39065503s

• [SLOW TEST:51.237 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
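
The UDP check above boils down to exec'ing into the host-network test pod and probing each netserver pod's IP on port 8081; the netserver answers with its hostname, which is matched against the expected endpoints. Stripped of the harness, the probe from this run looks roughly like the following (pod IP and namespace are the ones logged above; the harness additionally filters blank lines with grep):

kubectl exec host-test-container-pod -c hostexec -n e2e-tests-pod-network-test-b2sll -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.2.186 8081"
# expected output: netserver-0 (and netserver-1 when probing the second pod IP)
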
S
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:49:24.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 10 12:49:24.596: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jul 10 12:49:29.621: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 10 12:49:29.621: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul 10 12:49:29.662: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-nc97x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nc97x/deployments/test-cleanup-deployment,UID:cb6bbf8f-c2ab-11ea-b2c9-0242ac120008,ResourceVersion:28284,Generation:1,CreationTimestamp:2020-07-10 12:49:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jul 10 12:49:29.821: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jul 10 12:49:29.821: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jul 10 12:49:29.821: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-nc97x,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-nc97x/replicasets/test-cleanup-controller,UID:c868dd25-c2ab-11ea-b2c9-0242ac120008,ResourceVersion:28285,Generation:1,CreationTimestamp:2020-07-10 12:49:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment cb6bbf8f-c2ab-11ea-b2c9-0242ac120008 0xc001fcf607 0xc001fcf608}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul 10 12:49:29.866: INFO: Pod "test-cleanup-controller-57mrm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-57mrm,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-nc97x,SelfLink:/api/v1/namespaces/e2e-tests-deployment-nc97x/pods/test-cleanup-controller-57mrm,UID:c870decb-c2ab-11ea-b2c9-0242ac120008,ResourceVersion:28278,Generation:0,CreationTimestamp:2020-07-10 12:49:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller c868dd25-c2ab-11ea-b2c9-0242ac120008 0xc001d042c7 0xc001d042c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rfkvf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rfkvf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-rfkvf true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d04340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d04370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 12:49:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 12:49:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 12:49:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-10 12:49:24 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.176,StartTime:2020-07-10 12:49:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-10 12:49:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://90775b709ffb60e38e370c4973330e37088a2bfea67910c670eba9bdd60dc4c2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:49:29.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-nc97x" for this suite.
Jul 10 12:49:36.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:49:36.162: INFO: namespace: e2e-tests-deployment-nc97x, resource: bindings, ignored listing per whitelist
Jul 10 12:49:36.193: INFO: namespace e2e-tests-deployment-nc97x deletion completed in 6.150201553s

• [SLOW TEST:11.706 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
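
In this test the Deployment adopts the pre-existing test-cleanup-controller ReplicaSet, and because revisionHistoryLimit is 0 (RevisionHistoryLimit:*0 in the dump above) the superseded ReplicaSet is garbage-collected instead of being retained for rollback. A minimal standalone sketch of such a Deployment, reusing the names and image from the log:

kubectl create -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0   # old ReplicaSets are deleted instead of being kept for rollback
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF

kubectl get replicasets -l name=cleanup-pod   # after the rollout settles, only the current ReplicaSet remains
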
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:49:36.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-cf834249-c2ab-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume configMaps
Jul 10 12:49:36.500: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf83fede-c2ab-11ea-a406-0242ac11000f" in namespace "e2e-tests-configmap-97qfq" to be "success or failure"
Jul 10 12:49:36.502: INFO: Pod "pod-configmaps-cf83fede-c2ab-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.811596ms
Jul 10 12:49:38.507: INFO: Pod "pod-configmaps-cf83fede-c2ab-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007087347s
Jul 10 12:49:40.511: INFO: Pod "pod-configmaps-cf83fede-c2ab-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010975556s
STEP: Saw pod success
Jul 10 12:49:40.511: INFO: Pod "pod-configmaps-cf83fede-c2ab-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:49:40.513: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-cf83fede-c2ab-11ea-a406-0242ac11000f container configmap-volume-test: 
STEP: delete the pod
Jul 10 12:49:40.690: INFO: Waiting for pod pod-configmaps-cf83fede-c2ab-11ea-a406-0242ac11000f to disappear
Jul 10 12:49:40.706: INFO: Pod pod-configmaps-cf83fede-c2ab-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:49:40.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-97qfq" for this suite.
Jul 10 12:49:46.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:49:46.882: INFO: namespace: e2e-tests-configmap-97qfq, resource: bindings, ignored listing per whitelist
Jul 10 12:49:46.950: INFO: namespace e2e-tests-configmap-97qfq deletion completed in 6.240842511s

• [SLOW TEST:10.756 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
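
The pattern above is the standard configMap-as-volume consumption: the pod mounts the configmap as files and the test reads the mounted content back through the container's logs. A minimal sketch follows; the container name configmap-volume-test matches the log, while the key, value, image and mount path are illustrative.

kubectl create configmap configmap-test-volume --from-literal=data-1=value-1

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/nginx:1.14-alpine
    command: ["/bin/sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
EOF

kubectl logs pod-configmaps configmap-volume-test   # should print value-1
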
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:49:46.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 10 12:49:55.196: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 10 12:49:55.246: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 10 12:49:57.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 10 12:49:57.249: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 10 12:49:59.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 10 12:49:59.250: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 10 12:50:01.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 10 12:50:01.250: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 10 12:50:03.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 10 12:50:03.263: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 10 12:50:05.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 10 12:50:05.249: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 10 12:50:07.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 10 12:50:07.278: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 10 12:50:09.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 10 12:50:09.269: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 10 12:50:11.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 10 12:50:11.282: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 10 12:50:13.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 10 12:50:13.250: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 10 12:50:15.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 10 12:50:15.288: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 10 12:50:17.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 10 12:50:17.251: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 10 12:50:19.246: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 10 12:50:19.250: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:50:19.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-tc8gf" for this suite.
Jul 10 12:50:43.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:50:43.381: INFO: namespace: e2e-tests-container-lifecycle-hook-tc8gf, resource: bindings, ignored listing per whitelist
Jul 10 12:50:43.419: INFO: namespace e2e-tests-container-lifecycle-hook-tc8gf deletion completed in 24.11791135s

• [SLOW TEST:56.469 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
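
The pod under test declares a preStop exec hook, and deleting it is what drives the wait loop above: the kubelet runs the hook before stopping the container, and the test then checks that the helper pod created in [BeforeEach] received the hook's request. A minimal sketch of the shape involved; the hook command and image here are illustrative (the real test's hook calls the helper pod).

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo prestop hook ran"]
EOF

kubectl delete pod pod-with-prestop-exec-hook   # the preStop command runs before the container is terminated
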
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:50:43.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jul 10 12:50:43.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-67bcl'
Jul 10 12:50:47.267: INFO: stderr: ""
Jul 10 12:50:47.267: INFO: stdout: "pod/pause created\n"
Jul 10 12:50:47.267: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jul 10 12:50:47.267: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-67bcl" to be "running and ready"
Jul 10 12:50:47.294: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 26.580346ms
Jul 10 12:50:49.298: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031126469s
Jul 10 12:50:51.302: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.034896657s
Jul 10 12:50:51.302: INFO: Pod "pause" satisfied condition "running and ready"
Jul 10 12:50:51.302: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jul 10 12:50:51.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-67bcl'
Jul 10 12:50:51.422: INFO: stderr: ""
Jul 10 12:50:51.422: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jul 10 12:50:51.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-67bcl'
Jul 10 12:50:51.549: INFO: stderr: ""
Jul 10 12:50:51.549: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jul 10 12:50:51.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-67bcl'
Jul 10 12:50:51.651: INFO: stderr: ""
Jul 10 12:50:51.651: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jul 10 12:50:51.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-67bcl'
Jul 10 12:50:51.743: INFO: stderr: ""
Jul 10 12:50:51.743: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jul 10 12:50:51.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-67bcl'
Jul 10 12:50:51.878: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 10 12:50:51.878: INFO: stdout: "pod \"pause\" force deleted\n"
Jul 10 12:50:51.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-67bcl'
Jul 10 12:50:51.982: INFO: stderr: "No resources found.\n"
Jul 10 12:50:51.982: INFO: stdout: ""
Jul 10 12:50:51.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-67bcl -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 10 12:50:52.080: INFO: stderr: ""
Jul 10 12:50:52.080: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:50:52.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-67bcl" for this suite.
Jul 10 12:50:58.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:50:58.115: INFO: namespace: e2e-tests-kubectl-67bcl, resource: bindings, ignored listing per whitelist
Jul 10 12:50:58.176: INFO: namespace e2e-tests-kubectl-67bcl deletion completed in 6.092168586s

• [SLOW TEST:14.757 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
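
Stripped of the --kubeconfig and --namespace boilerplate, the label round-trip above is just the following four commands against the pause pod (a trailing '-' on the key removes the label):

kubectl label pods pause testing-label=testing-label-value
kubectl get pod pause -L testing-label     # TESTING-LABEL column shows testing-label-value
kubectl label pods pause testing-label-
kubectl get pod pause -L testing-label     # TESTING-LABEL column is now empty
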
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:50:58.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 10 12:50:58.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jul 10 12:50:58.337: INFO: stderr: ""
Jul 10 12:50:58.337: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-10T10:25:27Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jul 10 12:50:58.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ppwps'
Jul 10 12:50:58.589: INFO: stderr: ""
Jul 10 12:50:58.589: INFO: stdout: "replicationcontroller/redis-master created\n"
Jul 10 12:50:58.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ppwps'
Jul 10 12:50:58.936: INFO: stderr: ""
Jul 10 12:50:58.936: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul 10 12:50:59.953: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:50:59.953: INFO: Found 0 / 1
Jul 10 12:51:00.983: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:51:00.983: INFO: Found 0 / 1
Jul 10 12:51:01.942: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:51:01.942: INFO: Found 0 / 1
Jul 10 12:51:02.941: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:51:02.941: INFO: Found 1 / 1
Jul 10 12:51:02.941: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul 10 12:51:02.945: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:51:02.945: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 10 12:51:02.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-lnj7s --namespace=e2e-tests-kubectl-ppwps'
Jul 10 12:51:03.061: INFO: stderr: ""
Jul 10 12:51:03.061: INFO: stdout: "Name:               redis-master-lnj7s\nNamespace:          e2e-tests-kubectl-ppwps\nPriority:           0\nPriorityClassName:  \nNode:               hunter-worker2/172.18.0.2\nStart Time:         Fri, 10 Jul 2020 12:50:58 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        \nStatus:             Running\nIP:                 10.244.1.179\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://6f766cf4e3eaaf172ea7d073a95128d41fa065550997556a77f14df79e98339f\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 10 Jul 2020 12:51:02 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sf6m8 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-sf6m8:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-sf6m8\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                     Message\n  ----    ------     ----  ----                     -------\n  Normal  Scheduled  5s    default-scheduler        Successfully assigned e2e-tests-kubectl-ppwps/redis-master-lnj7s to hunter-worker2\n  Normal  Pulled     4s    kubelet, hunter-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, hunter-worker2  Created container\n  Normal  Started    1s    kubelet, hunter-worker2  Started container\n"
Jul 10 12:51:03.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-ppwps'
Jul 10 12:51:03.185: INFO: stderr: ""
Jul 10 12:51:03.186: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-ppwps\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: redis-master-lnj7s\n"
Jul 10 12:51:03.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-ppwps'
Jul 10 12:51:03.296: INFO: stderr: ""
Jul 10 12:51:03.296: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-ppwps\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.108.149.119\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.1.179:6379\nSession Affinity:  None\nEvents:            \n"
Jul 10 12:51:03.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane'
Jul 10 12:51:03.424: INFO: stderr: ""
Jul 10 12:51:03.424: INFO: stdout: "Name:               hunter-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=hunter-control-plane\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 10 Jul 2020 10:22:18 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Fri, 10 Jul 2020 12:50:53 +0000   Fri, 10 Jul 2020 10:22:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Fri, 10 Jul 2020 12:50:53 +0000   Fri, 10 Jul 2020 10:22:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Fri, 10 Jul 2020 12:50:53 +0000   Fri, 10 Jul 2020 10:22:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Fri, 10 Jul 2020 12:50:53 +0000   Fri, 10 Jul 2020 10:23:08 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.8\n  Hostname:    hunter-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nSystem Info:\n Machine ID:                 86b921187bcd42a69301f53c2d21b8f0\n System UUID:                dbd65bbc-7a27-4b36-b69e-be53f27cba5c\n Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version:             4.15.0-109-generic\n OS Image:                   Ubuntu 19.10\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.3.3-14-g449e9269\n Kubelet Version:            v1.13.12\n Kube-Proxy Version:         v1.13.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                            ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-54ff9cd656-46fs4                        100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     148m\n  kube-system                coredns-54ff9cd656-gzt7d                        100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     148m\n  kube-system                etcd-hunter-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         147m\n  kube-system                kindnet-r4bfs                                   100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      148m\n  kube-system                
kube-apiserver-hunter-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         147m\n  kube-system                kube-controller-manager-hunter-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         147m\n  kube-system                kube-proxy-4jv56                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         148m\n  kube-system                kube-scheduler-hunter-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         147m\n  local-path-storage         local-path-provisioner-674595c7-jw5rw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         148m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Jul 10 12:51:03.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-ppwps'
Jul 10 12:51:03.532: INFO: stderr: ""
Jul 10 12:51:03.532: INFO: stdout: "Name:         e2e-tests-kubectl-ppwps\nLabels:       e2e-framework=kubectl\n              e2e-run=09d24322-c29b-11ea-a406-0242ac11000f\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:51:03.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ppwps" for this suite.
Jul 10 12:51:25.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:51:25.596: INFO: namespace: e2e-tests-kubectl-ppwps, resource: bindings, ignored listing per whitelist
Jul 10 12:51:25.655: INFO: namespace e2e-tests-kubectl-ppwps deletion completed in 22.119834603s

• [SLOW TEST:27.479 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
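
The describe checks above reduce to one kubectl describe per resource kind touched by the test; the names below are the ones from this run (the pod name is generated, so it will differ between runs):

kubectl describe pod redis-master-lnj7s -n e2e-tests-kubectl-ppwps
kubectl describe rc redis-master -n e2e-tests-kubectl-ppwps
kubectl describe service redis-master -n e2e-tests-kubectl-ppwps
kubectl describe node hunter-control-plane
kubectl describe namespace e2e-tests-kubectl-ppwps
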
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:51:25.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jul 10 12:51:25.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-78fgt'
Jul 10 12:51:26.079: INFO: stderr: ""
Jul 10 12:51:26.079: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jul 10 12:51:27.083: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:51:27.083: INFO: Found 0 / 1
Jul 10 12:51:28.121: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:51:28.121: INFO: Found 0 / 1
Jul 10 12:51:29.103: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:51:29.103: INFO: Found 0 / 1
Jul 10 12:51:30.083: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:51:30.083: INFO: Found 0 / 1
Jul 10 12:51:31.084: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:51:31.084: INFO: Found 1 / 1
Jul 10 12:51:31.084: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul 10 12:51:31.087: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:51:31.087: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jul 10 12:51:31.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-gvfb2 redis-master --namespace=e2e-tests-kubectl-78fgt'
Jul 10 12:51:31.201: INFO: stderr: ""
Jul 10 12:51:31.201: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Jul 12:51:29.514 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Jul 12:51:29.514 # Server started, Redis version 3.2.12\n1:M 10 Jul 12:51:29.514 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Jul 12:51:29.514 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jul 10 12:51:31.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gvfb2 redis-master --namespace=e2e-tests-kubectl-78fgt --tail=1'
Jul 10 12:51:31.313: INFO: stderr: ""
Jul 10 12:51:31.313: INFO: stdout: "1:M 10 Jul 12:51:29.514 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jul 10 12:51:31.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gvfb2 redis-master --namespace=e2e-tests-kubectl-78fgt --limit-bytes=1'
Jul 10 12:51:31.406: INFO: stderr: ""
Jul 10 12:51:31.406: INFO: stdout: " "
STEP: exposing timestamps
Jul 10 12:51:31.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gvfb2 redis-master --namespace=e2e-tests-kubectl-78fgt --tail=1 --timestamps'
Jul 10 12:51:31.506: INFO: stderr: ""
Jul 10 12:51:31.506: INFO: stdout: "2020-07-10T12:51:29.514606198Z 1:M 10 Jul 12:51:29.514 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jul 10 12:51:34.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gvfb2 redis-master --namespace=e2e-tests-kubectl-78fgt --since=1s'
Jul 10 12:51:34.111: INFO: stderr: ""
Jul 10 12:51:34.111: INFO: stdout: ""
Jul 10 12:51:34.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-gvfb2 redis-master --namespace=e2e-tests-kubectl-78fgt --since=24h'
Jul 10 12:51:34.220: INFO: stderr: ""
Jul 10 12:51:34.220: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Jul 12:51:29.514 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Jul 12:51:29.514 # Server started, Redis version 3.2.12\n1:M 10 Jul 12:51:29.514 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Jul 12:51:29.514 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jul 10 12:51:34.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-78fgt'
Jul 10 12:51:34.405: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 10 12:51:34.405: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jul 10 12:51:34.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-78fgt'
Jul 10 12:51:34.493: INFO: stderr: "No resources found.\n"
Jul 10 12:51:34.493: INFO: stdout: ""
Jul 10 12:51:34.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-78fgt -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 10 12:51:34.610: INFO: stderr: ""
Jul 10 12:51:34.610: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:51:34.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-78fgt" for this suite.
Jul 10 12:51:56.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:51:56.882: INFO: namespace: e2e-tests-kubectl-78fgt, resource: bindings, ignored listing per whitelist
Jul 10 12:51:56.933: INFO: namespace e2e-tests-kubectl-78fgt deletion completed in 22.318784268s

• [SLOW TEST:31.279 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
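
The spec above exercises kubectl's log-filtering flags (--tail, --limit-bytes, --timestamps, --since); the run used the older "kubectl log" alias. A minimal sketch of the same flags against an arbitrary pod; the pod, container, and namespace names below are placeholders, not values from this run:

    # Tail only the most recent line, with timestamps prepended
    kubectl logs my-pod my-container --namespace=my-namespace --tail=1 --timestamps

    # Cap the output at a fixed number of bytes
    kubectl logs my-pod my-container --namespace=my-namespace --limit-bytes=1

    # Restrict to a relative time window (empty if nothing was logged in that window)
    kubectl logs my-pod my-container --namespace=my-namespace --since=1s
    kubectl logs my-pod my-container --namespace=my-namespace --since=24h
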
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:51:56.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-234865fc-c2ac-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume secrets
Jul 10 12:51:57.039: INFO: Waiting up to 5m0s for pod "pod-secrets-2348c337-c2ac-11ea-a406-0242ac11000f" in namespace "e2e-tests-secrets-sw7tp" to be "success or failure"
Jul 10 12:51:57.044: INFO: Pod "pod-secrets-2348c337-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.544268ms
Jul 10 12:51:59.047: INFO: Pod "pod-secrets-2348c337-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008143925s
Jul 10 12:52:01.050: INFO: Pod "pod-secrets-2348c337-c2ac-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011148966s
STEP: Saw pod success
Jul 10 12:52:01.050: INFO: Pod "pod-secrets-2348c337-c2ac-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:52:01.052: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-2348c337-c2ac-11ea-a406-0242ac11000f container secret-volume-test: 
STEP: delete the pod
Jul 10 12:52:01.118: INFO: Waiting for pod pod-secrets-2348c337-c2ac-11ea-a406-0242ac11000f to disappear
Jul 10 12:52:01.126: INFO: Pod pod-secrets-2348c337-c2ac-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:52:01.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-sw7tp" for this suite.
Jul 10 12:52:07.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:52:07.186: INFO: namespace: e2e-tests-secrets-sw7tp, resource: bindings, ignored listing per whitelist
Jul 10 12:52:07.227: INFO: namespace e2e-tests-secrets-sw7tp deletion completed in 6.09796079s

• [SLOW TEST:10.293 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
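
The Secrets-with-mappings spec mounts a secret as a volume while remapping a key to a different file path via items. A minimal sketch of that shape, assuming a secret, image, and key names of your choosing (all placeholders, not from this run):

    kubectl create secret generic secret-test --from-literal=data-1=value-1

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-mapped
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox                 # placeholder image
        command: ["cat", "/etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test
          items:                       # map key "data-1" to a different file name
          - key: data-1
            path: new-path-data-1
    EOF
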
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:52:07.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 10 12:52:07.352: INFO: Waiting up to 5m0s for pod "downwardapi-volume-296e49ee-c2ac-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-psbsg" to be "success or failure"
Jul 10 12:52:07.384: INFO: Pod "downwardapi-volume-296e49ee-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 32.706778ms
Jul 10 12:52:09.439: INFO: Pod "downwardapi-volume-296e49ee-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087383819s
Jul 10 12:52:11.444: INFO: Pod "downwardapi-volume-296e49ee-c2ac-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091789704s
STEP: Saw pod success
Jul 10 12:52:11.444: INFO: Pod "downwardapi-volume-296e49ee-c2ac-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:52:11.446: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-296e49ee-c2ac-11ea-a406-0242ac11000f container client-container: 
STEP: delete the pod
Jul 10 12:52:11.477: INFO: Waiting for pod downwardapi-volume-296e49ee-c2ac-11ea-a406-0242ac11000f to disappear
Jul 10 12:52:11.543: INFO: Pod downwardapi-volume-296e49ee-c2ac-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:52:11.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-psbsg" for this suite.
Jul 10 12:52:17.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:52:17.761: INFO: namespace: e2e-tests-downward-api-psbsg, resource: bindings, ignored listing per whitelist
Jul 10 12:52:17.889: INFO: namespace e2e-tests-downward-api-psbsg deletion completed in 6.220980685s

• [SLOW TEST:10.661 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
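
This spec reads limits.cpu through a downwardAPI volume on a container that sets no CPU limit, so the node's allocatable CPU is reported instead. A minimal sketch of that shape; the image, names, and divisor are placeholders:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-default-cpu
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox                      # placeholder image
        command: ["cat", "/etc/podinfo/cpu_limit"]
        # no resources.limits.cpu set on purpose; node allocatable is reported
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m                   # report in millicores
    EOF
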
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:52:17.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jul 10 12:52:17.983: INFO: namespace e2e-tests-kubectl-wn6fz
Jul 10 12:52:17.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wn6fz'
Jul 10 12:52:18.244: INFO: stderr: ""
Jul 10 12:52:18.244: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul 10 12:52:19.248: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:52:19.248: INFO: Found 0 / 1
Jul 10 12:52:20.347: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:52:20.347: INFO: Found 0 / 1
Jul 10 12:52:21.248: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:52:21.248: INFO: Found 0 / 1
Jul 10 12:52:22.249: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:52:22.249: INFO: Found 0 / 1
Jul 10 12:52:23.248: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:52:23.248: INFO: Found 1 / 1
Jul 10 12:52:23.248: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul 10 12:52:23.251: INFO: Selector matched 1 pods for map[app:redis]
Jul 10 12:52:23.251: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 10 12:52:23.251: INFO: wait on redis-master startup in e2e-tests-kubectl-wn6fz 
Jul 10 12:52:23.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-hg495 redis-master --namespace=e2e-tests-kubectl-wn6fz'
Jul 10 12:52:23.357: INFO: stderr: ""
Jul 10 12:52:23.357: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 10 Jul 12:52:21.615 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 10 Jul 12:52:21.616 # Server started, Redis version 3.2.12\n1:M 10 Jul 12:52:21.616 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 10 Jul 12:52:21.616 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jul 10 12:52:23.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-wn6fz'
Jul 10 12:52:23.480: INFO: stderr: ""
Jul 10 12:52:23.480: INFO: stdout: "service/rm2 exposed\n"
Jul 10 12:52:23.492: INFO: Service rm2 in namespace e2e-tests-kubectl-wn6fz found.
STEP: exposing service
Jul 10 12:52:25.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-wn6fz'
Jul 10 12:52:25.643: INFO: stderr: ""
Jul 10 12:52:25.643: INFO: stdout: "service/rm3 exposed\n"
Jul 10 12:52:25.690: INFO: Service rm3 in namespace e2e-tests-kubectl-wn6fz found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:52:27.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wn6fz" for this suite.
Jul 10 12:52:45.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:52:45.767: INFO: namespace: e2e-tests-kubectl-wn6fz, resource: bindings, ignored listing per whitelist
Jul 10 12:52:45.820: INFO: namespace e2e-tests-kubectl-wn6fz deletion completed in 18.11843053s

• [SLOW TEST:27.931 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
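
The expose spec creates a service from an existing replication controller and then a second service from that service. A sketch of the same commands as run above; the controller, service, and namespace names here are placeholders:

    # Expose an existing replication controller as a ClusterIP service
    kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=my-namespace

    # Expose an existing service under a new name and port
    kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=my-namespace

    # Verify the resulting services
    kubectl get svc rm2 rm3 --namespace=my-namespace
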
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:52:45.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 10 12:52:45.991: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jul 10 12:52:46.006: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:46.007: INFO: Number of nodes with available pods: 0
Jul 10 12:52:46.007: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:52:47.012: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:47.016: INFO: Number of nodes with available pods: 0
Jul 10 12:52:47.016: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:52:48.033: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:48.036: INFO: Number of nodes with available pods: 0
Jul 10 12:52:48.036: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:52:49.013: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:49.017: INFO: Number of nodes with available pods: 0
Jul 10 12:52:49.017: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:52:50.012: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:50.015: INFO: Number of nodes with available pods: 1
Jul 10 12:52:50.015: INFO: Node hunter-worker is running more than one daemon pod
Jul 10 12:52:51.013: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:51.017: INFO: Number of nodes with available pods: 2
Jul 10 12:52:51.017: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jul 10 12:52:51.048: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:51.048: INFO: Wrong image for pod: daemon-set-m954s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:51.064: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:52.067: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:52.067: INFO: Wrong image for pod: daemon-set-m954s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:52.070: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:53.069: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:53.069: INFO: Wrong image for pod: daemon-set-m954s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:53.073: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:54.068: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:54.068: INFO: Wrong image for pod: daemon-set-m954s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:54.075: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:55.069: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:55.069: INFO: Wrong image for pod: daemon-set-m954s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:55.069: INFO: Pod daemon-set-m954s is not available
Jul 10 12:52:55.072: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:56.068: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:56.068: INFO: Wrong image for pod: daemon-set-m954s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:56.068: INFO: Pod daemon-set-m954s is not available
Jul 10 12:52:56.072: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:57.068: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:57.068: INFO: Wrong image for pod: daemon-set-m954s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:57.068: INFO: Pod daemon-set-m954s is not available
Jul 10 12:52:57.071: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:58.068: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:58.068: INFO: Pod daemon-set-htj97 is not available
Jul 10 12:52:58.072: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:52:59.068: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:52:59.068: INFO: Pod daemon-set-htj97 is not available
Jul 10 12:52:59.147: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:53:00.134: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:53:00.134: INFO: Pod daemon-set-htj97 is not available
Jul 10 12:53:00.137: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:53:01.068: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:53:01.068: INFO: Pod daemon-set-htj97 is not available
Jul 10 12:53:01.071: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:53:02.108: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:53:02.110: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:53:03.069: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:53:03.073: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:53:04.069: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:53:04.069: INFO: Pod daemon-set-2snd8 is not available
Jul 10 12:53:04.072: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:53:05.069: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:53:05.069: INFO: Pod daemon-set-2snd8 is not available
Jul 10 12:53:05.072: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:53:06.068: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:53:06.068: INFO: Pod daemon-set-2snd8 is not available
Jul 10 12:53:06.072: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:53:07.069: INFO: Wrong image for pod: daemon-set-2snd8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 10 12:53:07.069: INFO: Pod daemon-set-2snd8 is not available
Jul 10 12:53:07.073: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:53:08.067: INFO: Pod daemon-set-gkc4n is not available
Jul 10 12:53:08.070: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jul 10 12:53:08.073: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:53:08.075: INFO: Number of nodes with available pods: 1
Jul 10 12:53:08.075: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 10 12:53:09.081: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:53:09.084: INFO: Number of nodes with available pods: 1
Jul 10 12:53:09.084: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 10 12:53:10.080: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:53:10.083: INFO: Number of nodes with available pods: 1
Jul 10 12:53:10.083: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 10 12:53:11.080: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 10 12:53:11.083: INFO: Number of nodes with available pods: 2
Jul 10 12:53:11.083: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-ctcdf, will wait for the garbage collector to delete the pods
Jul 10 12:53:11.154: INFO: Deleting DaemonSet.extensions daemon-set took: 5.382421ms
Jul 10 12:53:11.255: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.203313ms
Jul 10 12:53:17.679: INFO: Number of nodes with available pods: 0
Jul 10 12:53:17.679: INFO: Number of running nodes: 0, number of available pods: 0
Jul 10 12:53:17.682: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-ctcdf/daemonsets","resourceVersion":"29098"},"items":null}

Jul 10 12:53:17.709: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-ctcdf/pods","resourceVersion":"29099"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:53:17.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-ctcdf" for this suite.
Jul 10 12:53:23.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:53:23.793: INFO: namespace: e2e-tests-daemonsets-ctcdf, resource: bindings, ignored listing per whitelist
Jul 10 12:53:23.819: INFO: namespace e2e-tests-daemonsets-ctcdf deletion completed in 6.097722376s

• [SLOW TEST:37.999 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
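
The DaemonSet spec verifies that changing the pod template image replaces pods node by node when updateStrategy is RollingUpdate, tainted control-plane nodes excepted. A minimal sketch of a DaemonSet with that strategy and an image bump; the names are placeholders, the images are the ones from this run:

    kubectl create -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      updateStrategy:
        type: RollingUpdate            # replace pods in place as the template changes
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app
            image: docker.io/library/nginx:1.14-alpine
    EOF

    # Bump the image and watch the rollout proceed one node at a time
    kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
    kubectl rollout status daemonset/daemon-set
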
S
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:53:23.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 10 12:53:23.915: INFO: Waiting up to 5m0s for pod "downward-api-570ed338-c2ac-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-5s9wj" to be "success or failure"
Jul 10 12:53:23.955: INFO: Pod "downward-api-570ed338-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 39.841719ms
Jul 10 12:53:26.135: INFO: Pod "downward-api-570ed338-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21987584s
Jul 10 12:53:28.139: INFO: Pod "downward-api-570ed338-c2ac-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.223527816s
STEP: Saw pod success
Jul 10 12:53:28.139: INFO: Pod "downward-api-570ed338-c2ac-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:53:28.142: INFO: Trying to get logs from node hunter-worker pod downward-api-570ed338-c2ac-11ea-a406-0242ac11000f container dapi-container: 
STEP: delete the pod
Jul 10 12:53:28.177: INFO: Waiting for pod downward-api-570ed338-c2ac-11ea-a406-0242ac11000f to disappear
Jul 10 12:53:28.189: INFO: Pod downward-api-570ed338-c2ac-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:53:28.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5s9wj" for this suite.
Jul 10 12:53:34.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:53:34.262: INFO: namespace: e2e-tests-downward-api-5s9wj, resource: bindings, ignored listing per whitelist
Jul 10 12:53:34.300: INFO: namespace e2e-tests-downward-api-5s9wj deletion completed in 6.108145824s

• [SLOW TEST:10.481 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
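
This spec injects the container's own CPU/memory limits and requests as environment variables via resourceFieldRef. A minimal sketch; the image, variable names, and resource values are placeholders:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-env
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox                 # placeholder image
        command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
        resources:
          requests:
            cpu: 250m
            memory: 32Mi
          limits:
            cpu: 500m
            memory: 64Mi
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_REQUEST
          valueFrom:
            resourceFieldRef:
              resource: requests.memory
    EOF
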
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:53:34.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-nmf4v/configmap-test-5d5745c9-c2ac-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume configMaps
Jul 10 12:53:34.455: INFO: Waiting up to 5m0s for pod "pod-configmaps-5d5888d9-c2ac-11ea-a406-0242ac11000f" in namespace "e2e-tests-configmap-nmf4v" to be "success or failure"
Jul 10 12:53:34.458: INFO: Pod "pod-configmaps-5d5888d9-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.706131ms
Jul 10 12:53:36.554: INFO: Pod "pod-configmaps-5d5888d9-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099550895s
Jul 10 12:53:38.558: INFO: Pod "pod-configmaps-5d5888d9-c2ac-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103289185s
STEP: Saw pod success
Jul 10 12:53:38.558: INFO: Pod "pod-configmaps-5d5888d9-c2ac-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:53:38.561: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-5d5888d9-c2ac-11ea-a406-0242ac11000f container env-test: 
STEP: delete the pod
Jul 10 12:53:38.639: INFO: Waiting for pod pod-configmaps-5d5888d9-c2ac-11ea-a406-0242ac11000f to disappear
Jul 10 12:53:38.653: INFO: Pod pod-configmaps-5d5888d9-c2ac-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:53:38.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-nmf4v" for this suite.
Jul 10 12:53:44.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:53:44.698: INFO: namespace: e2e-tests-configmap-nmf4v, resource: bindings, ignored listing per whitelist
Jul 10 12:53:44.727: INFO: namespace e2e-tests-configmap-nmf4v deletion completed in 6.070901082s

• [SLOW TEST:10.427 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
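
This spec reads a ConfigMap key as an environment variable via configMapKeyRef. A minimal sketch; the ConfigMap name, key, and image are placeholders:

    kubectl create configmap configmap-test --from-literal=data-1=value-1

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-env
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox                 # placeholder image
        command: ["sh", "-c", "echo $CONFIG_DATA_1"]
        env:
        - name: CONFIG_DATA_1
          valueFrom:
            configMapKeyRef:
              name: configmap-test
              key: data-1
    EOF
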
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:53:44.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 10 12:53:44.914: INFO: Waiting up to 5m0s for pod "pod-6394f919-c2ac-11ea-a406-0242ac11000f" in namespace "e2e-tests-emptydir-pd2zc" to be "success or failure"
Jul 10 12:53:44.959: INFO: Pod "pod-6394f919-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 45.413785ms
Jul 10 12:53:46.964: INFO: Pod "pod-6394f919-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04958264s
Jul 10 12:53:48.968: INFO: Pod "pod-6394f919-c2ac-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 4.053770822s
Jul 10 12:53:50.972: INFO: Pod "pod-6394f919-c2ac-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057633219s
STEP: Saw pod success
Jul 10 12:53:50.972: INFO: Pod "pod-6394f919-c2ac-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:53:50.974: INFO: Trying to get logs from node hunter-worker pod pod-6394f919-c2ac-11ea-a406-0242ac11000f container test-container: 
STEP: delete the pod
Jul 10 12:53:50.993: INFO: Waiting for pod pod-6394f919-c2ac-11ea-a406-0242ac11000f to disappear
Jul 10 12:53:50.997: INFO: Pod pod-6394f919-c2ac-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:53:50.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-pd2zc" for this suite.
Jul 10 12:53:57.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:53:57.095: INFO: namespace: e2e-tests-emptydir-pd2zc, resource: bindings, ignored listing per whitelist
Jul 10 12:53:57.107: INFO: namespace e2e-tests-emptydir-pd2zc deletion completed in 6.105712708s

• [SLOW TEST:12.380 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
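
This spec writes a 0666-mode file into an emptyDir volume on the default medium while running as a non-root user. The e2e image handles the mode check itself; the sketch below only shows the volume and securityContext shape with a generic placeholder image and UID:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-emptydir-nonroot
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                # run as a non-root UID
      containers:
      - name: test-container
        image: busybox                 # placeholder image
        command: ["sh", "-c", "echo hello > /test-volume/file && ls -l /test-volume/file"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}                   # default medium (node disk, not tmpfs)
    EOF
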
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:53:57.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 10 12:53:57.257: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6af03898-c2ac-11ea-a406-0242ac11000f" in namespace "e2e-tests-downward-api-z4jrd" to be "success or failure"
Jul 10 12:53:57.279: INFO: Pod "downwardapi-volume-6af03898-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 22.187683ms
Jul 10 12:53:59.284: INFO: Pod "downwardapi-volume-6af03898-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026489226s
Jul 10 12:54:01.288: INFO: Pod "downwardapi-volume-6af03898-c2ac-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030453826s
STEP: Saw pod success
Jul 10 12:54:01.288: INFO: Pod "downwardapi-volume-6af03898-c2ac-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:54:01.291: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6af03898-c2ac-11ea-a406-0242ac11000f container client-container: 
STEP: delete the pod
Jul 10 12:54:01.311: INFO: Waiting for pod downwardapi-volume-6af03898-c2ac-11ea-a406-0242ac11000f to disappear
Jul 10 12:54:01.315: INFO: Pod downwardapi-volume-6af03898-c2ac-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:54:01.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-z4jrd" for this suite.
Jul 10 12:54:07.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:54:07.384: INFO: namespace: e2e-tests-downward-api-z4jrd, resource: bindings, ignored listing per whitelist
Jul 10 12:54:07.390: INFO: namespace e2e-tests-downward-api-z4jrd deletion completed in 6.07156716s

• [SLOW TEST:10.282 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
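
Unlike the default-limit case earlier, this spec sets an explicit CPU limit and expects that value back through the downwardAPI volume file. A compact sketch of the same pod shape with the limit added; names, image, and values are placeholders:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-cpu-limit
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox                      # placeholder image
        command: ["cat", "/etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: 1250m                      # the value the volume file should report
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
    EOF
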
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:54:07.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 10 12:54:07.499: INFO: Waiting up to 5m0s for pod "downwardapi-volume-710b5fba-c2ac-11ea-a406-0242ac11000f" in namespace "e2e-tests-projected-hwxhz" to be "success or failure"
Jul 10 12:54:07.519: INFO: Pod "downwardapi-volume-710b5fba-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.979721ms
Jul 10 12:54:09.523: INFO: Pod "downwardapi-volume-710b5fba-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024341892s
Jul 10 12:54:11.527: INFO: Pod "downwardapi-volume-710b5fba-c2ac-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028003253s
STEP: Saw pod success
Jul 10 12:54:11.527: INFO: Pod "downwardapi-volume-710b5fba-c2ac-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:54:11.529: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-710b5fba-c2ac-11ea-a406-0242ac11000f container client-container: 
STEP: delete the pod
Jul 10 12:54:11.548: INFO: Waiting for pod downwardapi-volume-710b5fba-c2ac-11ea-a406-0242ac11000f to disappear
Jul 10 12:54:11.552: INFO: Pod downwardapi-volume-710b5fba-c2ac-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:54:11.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hwxhz" for this suite.
Jul 10 12:54:17.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:54:17.635: INFO: namespace: e2e-tests-projected-hwxhz, resource: bindings, ignored listing per whitelist
Jul 10 12:54:17.663: INFO: namespace e2e-tests-projected-hwxhz deletion completed in 6.077950559s

• [SLOW TEST:10.273 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
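
The projected-volume variant reads the same downward API data through a projected volume, which can combine downwardAPI, configMap, secret, and serviceAccountToken sources under one mount. A minimal sketch with only a downwardAPI source; names and image are placeholders:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-projected
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox                      # placeholder image
        command: ["cat", "/etc/podinfo/cpu_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
                  divisor: 1m
    EOF
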
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:54:17.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-772c0da7-c2ac-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume secrets
Jul 10 12:54:17.818: INFO: Waiting up to 5m0s for pod "pod-secrets-772eab09-c2ac-11ea-a406-0242ac11000f" in namespace "e2e-tests-secrets-lsn5n" to be "success or failure"
Jul 10 12:54:17.822: INFO: Pod "pod-secrets-772eab09-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.69899ms
Jul 10 12:54:19.908: INFO: Pod "pod-secrets-772eab09-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090144143s
Jul 10 12:54:21.912: INFO: Pod "pod-secrets-772eab09-c2ac-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094032719s
STEP: Saw pod success
Jul 10 12:54:21.912: INFO: Pod "pod-secrets-772eab09-c2ac-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:54:21.915: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-772eab09-c2ac-11ea-a406-0242ac11000f container secret-volume-test: 
STEP: delete the pod
Jul 10 12:54:21.959: INFO: Waiting for pod pod-secrets-772eab09-c2ac-11ea-a406-0242ac11000f to disappear
Jul 10 12:54:21.974: INFO: Pod pod-secrets-772eab09-c2ac-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:54:21.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-lsn5n" for this suite.
Jul 10 12:54:27.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:54:28.008: INFO: namespace: e2e-tests-secrets-lsn5n, resource: bindings, ignored listing per whitelist
Jul 10 12:54:28.045: INFO: namespace e2e-tests-secrets-lsn5n deletion completed in 6.066613504s

• [SLOW TEST:10.381 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
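
The defaultMode spec mounts a secret volume and checks that its files carry the requested permission bits. A minimal sketch; the secret name, data, image, and mode are placeholders:

    kubectl create secret generic secret-test --from-literal=data-1=value-1

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-defaultmode
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox                 # placeholder image
        command: ["sh", "-c", "ls -l /etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test
          defaultMode: 0400            # octal in YAML; use 256 if writing JSON
    EOF
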
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 10 12:54:28.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7d574e72-c2ac-11ea-a406-0242ac11000f
STEP: Creating a pod to test consume secrets
Jul 10 12:54:28.280: INFO: Waiting up to 5m0s for pod "pod-secrets-7d6d2402-c2ac-11ea-a406-0242ac11000f" in namespace "e2e-tests-secrets-2fktb" to be "success or failure"
Jul 10 12:54:28.351: INFO: Pod "pod-secrets-7d6d2402-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 70.64499ms
Jul 10 12:54:30.514: INFO: Pod "pod-secrets-7d6d2402-c2ac-11ea-a406-0242ac11000f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233344552s
Jul 10 12:54:32.516: INFO: Pod "pod-secrets-7d6d2402-c2ac-11ea-a406-0242ac11000f": Phase="Running", Reason="", readiness=true. Elapsed: 4.236191532s
Jul 10 12:54:34.520: INFO: Pod "pod-secrets-7d6d2402-c2ac-11ea-a406-0242ac11000f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.240001717s
STEP: Saw pod success
Jul 10 12:54:34.520: INFO: Pod "pod-secrets-7d6d2402-c2ac-11ea-a406-0242ac11000f" satisfied condition "success or failure"
Jul 10 12:54:34.522: INFO: Trying to get logs from node hunter-worker pod pod-secrets-7d6d2402-c2ac-11ea-a406-0242ac11000f container secret-volume-test: 
STEP: delete the pod
Jul 10 12:54:34.548: INFO: Waiting for pod pod-secrets-7d6d2402-c2ac-11ea-a406-0242ac11000f to disappear
Jul 10 12:54:34.553: INFO: Pod pod-secrets-7d6d2402-c2ac-11ea-a406-0242ac11000f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 10 12:54:34.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2fktb" for this suite.
Jul 10 12:54:41.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:54:41.716: INFO: namespace: e2e-tests-secrets-2fktb, resource: bindings, ignored listing per whitelist
Jul 10 12:54:41.757: INFO: namespace e2e-tests-secrets-2fktb deletion completed in 7.200501017s
STEP: Destroying namespace "e2e-tests-secret-namespace-lzbs4" for this suite.
Jul 10 12:54:48.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 10 12:54:48.149: INFO: namespace: e2e-tests-secret-namespace-lzbs4, resource: bindings, ignored listing per whitelist
Jul 10 12:54:48.240: INFO: namespace e2e-tests-secret-namespace-lzbs4 deletion completed in 6.483475667s

• [SLOW TEST:20.195 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
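
The final spec creates a secret with the same name in two namespaces and confirms a pod only ever mounts the copy from its own namespace. A sketch of the setup with placeholder namespace names; a pod in ns-a mounting secretName "secret-test" (same volume shape as the secret sketches above) sees only the ns-a payload, because secrets are namespaced:

    kubectl create namespace ns-a
    kubectl create namespace ns-b
    kubectl create secret generic secret-test --from-literal=data-1=from-ns-a --namespace=ns-a
    kubectl create secret generic secret-test --from-literal=data-1=from-ns-b --namespace=ns-b
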
SSSSS
Jul 10 12:54:48.240: INFO: Running AfterSuite actions on all nodes
Jul 10 12:54:48.240: INFO: Running AfterSuite actions on node 1
Jul 10 12:54:48.240: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 7514.760 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS