I0707 10:47:05.467535 6 e2e.go:224] Starting e2e run "32654ca2-c03f-11ea-9ad7-0242ac11001b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1594118824 - Will randomize all specs
Will run 201 of 2164 specs

Jul 7 10:47:05.644: INFO: >>> kubeConfig: /root/.kube/config
Jul 7 10:47:05.646: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 7 10:47:05.660: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 7 10:47:06.149: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 7 10:47:06.149: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 7 10:47:06.149: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 7 10:47:06.157: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 7 10:47:06.157: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 7 10:47:06.157: INFO: e2e test version: v1.13.12
Jul 7 10:47:06.158: INFO: kube-apiserver version: v1.13.12
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 7 10:47:06.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Jul 7 10:47:07.312: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-33d2df36-c03f-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume secrets
Jul 7 10:47:07.534: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-33d385f9-c03f-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-d27r5" to be "success or failure"
Jul 7 10:47:07.703: INFO: Pod "pod-projected-secrets-33d385f9-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 169.843518ms
Jul 7 10:47:09.708: INFO: Pod "pod-projected-secrets-33d385f9-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174162313s
Jul 7 10:47:11.712: INFO: Pod "pod-projected-secrets-33d385f9-c03f-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.178058766s
Jul 7 10:47:14.112: INFO: Pod "pod-projected-secrets-33d385f9-c03f-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.57798237s
STEP: Saw pod success
Jul 7 10:47:14.112: INFO: Pod "pod-projected-secrets-33d385f9-c03f-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul 7 10:47:14.115: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-33d385f9-c03f-11ea-9ad7-0242ac11001b container projected-secret-volume-test:
STEP: delete the pod
Jul 7 10:47:14.555: INFO: Waiting for pod pod-projected-secrets-33d385f9-c03f-11ea-9ad7-0242ac11001b to disappear
Jul 7 10:47:14.603: INFO: Pod pod-projected-secrets-33d385f9-c03f-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 7 10:47:14.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-d27r5" for this suite.
Jul 7 10:47:21.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 7 10:47:21.159: INFO: namespace: e2e-tests-projected-d27r5, resource: bindings, ignored listing per whitelist
Jul 7 10:47:21.204: INFO: namespace e2e-tests-projected-d27r5 deletion completed in 6.54957494s

• [SLOW TEST:15.046 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace
  should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 7 10:47:21.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 7 10:47:21.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-w7wn5'
Jul 7 10:47:23.834: INFO: stderr: ""
Jul 7 10:47:23.834: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jul 7 10:47:28.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-w7wn5 -o json'
Jul 7 10:47:28.982: INFO: stderr: ""
Jul 7 10:47:28.982: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-07T10:47:23Z\",\n \"labels\": {\n \"run\": 
\"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-w7wn5\",\n \"resourceVersion\": \"586890\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-w7wn5/pods/e2e-test-nginx-pod\",\n \"uid\": \"3da93029-c03f-11ea-a300-0242ac110004\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-v5b6w\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-v5b6w\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-v5b6w\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-07T10:47:23Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-07T10:47:26Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-07T10:47:26Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-07T10:47:23Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://34d197dcd6c01ef8a7a860fbe72b0535aae353592c2476251cb7c7fe5a0fbb4d\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-07T10:47:26Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.3\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.54\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-07T10:47:23Z\"\n }\n}\n" STEP: replace the image in the pod Jul 7 10:47:28.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-w7wn5' Jul 7 10:47:29.857: INFO: stderr: "" Jul 7 10:47:29.857: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 Jul 7 10:47:29.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-w7wn5' Jul 7 10:47:35.892: INFO: stderr: "" Jul 7 10:47:35.892: INFO: stdout: "pod 
\"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:47:35.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-w7wn5" for this suite. Jul 7 10:47:41.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:47:41.927: INFO: namespace: e2e-tests-kubectl-w7wn5, resource: bindings, ignored listing per whitelist Jul 7 10:47:42.242: INFO: namespace e2e-tests-kubectl-w7wn5 deletion completed in 6.346484699s • [SLOW TEST:21.038 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:47:42.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 7 10:47:42.672: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:47:46.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-nct44" for this suite. 
Jul 7 10:48:38.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:48:38.880: INFO: namespace: e2e-tests-pods-nct44, resource: bindings, ignored listing per whitelist Jul 7 10:48:38.900: INFO: namespace e2e-tests-pods-nct44 deletion completed in 52.096745796s • [SLOW TEST:56.658 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:48:38.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 7 10:48:39.065: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a7f0ab3-c03f-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-qjlrb" to be "success or failure" Jul 7 10:48:39.088: INFO: Pod "downwardapi-volume-6a7f0ab3-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.085142ms Jul 7 10:48:41.093: INFO: Pod "downwardapi-volume-6a7f0ab3-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027516428s Jul 7 10:48:43.097: INFO: Pod "downwardapi-volume-6a7f0ab3-c03f-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03185727s STEP: Saw pod success Jul 7 10:48:43.097: INFO: Pod "downwardapi-volume-6a7f0ab3-c03f-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 10:48:43.099: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-6a7f0ab3-c03f-11ea-9ad7-0242ac11001b container client-container: STEP: delete the pod Jul 7 10:48:43.174: INFO: Waiting for pod downwardapi-volume-6a7f0ab3-c03f-11ea-9ad7-0242ac11001b to disappear Jul 7 10:48:43.207: INFO: Pod downwardapi-volume-6a7f0ab3-c03f-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:48:43.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qjlrb" for this suite. 
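The downward API volume behaviour exercised above, exposing a container's CPU limit as a file inside the pod, can be reproduced outside the suite with a spec along the following lines. This is an illustrative sketch only: the pod name, the busybox image, the 250m limit and the mount path are placeholder choices, not values taken from this run.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # Print the file the downward API volume populates with this container's CPU limit, then exit.
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "250m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "cpu_limit"
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
kubectl logs downwardapi-cpu-limit-demo   # the limit, rounded up to whole cores by the default divisor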
Jul 7 10:48:49.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:48:49.345: INFO: namespace: e2e-tests-downward-api-qjlrb, resource: bindings, ignored listing per whitelist Jul 7 10:48:49.419: INFO: namespace e2e-tests-downward-api-qjlrb deletion completed in 6.207958878s • [SLOW TEST:10.519 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:48:49.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on tmpfs Jul 7 10:48:49.710: INFO: Waiting up to 5m0s for pod "pod-70cea3a6-c03f-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-lsmsv" to be "success or failure" Jul 7 10:48:49.746: INFO: Pod "pod-70cea3a6-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 35.643066ms Jul 7 10:48:51.902: INFO: Pod "pod-70cea3a6-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191442711s Jul 7 10:48:53.986: INFO: Pod "pod-70cea3a6-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.27539169s Jul 7 10:48:55.990: INFO: Pod "pod-70cea3a6-c03f-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.279259908s STEP: Saw pod success Jul 7 10:48:55.990: INFO: Pod "pod-70cea3a6-c03f-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 10:48:55.992: INFO: Trying to get logs from node hunter-worker pod pod-70cea3a6-c03f-11ea-9ad7-0242ac11001b container test-container: STEP: delete the pod Jul 7 10:48:56.166: INFO: Waiting for pod pod-70cea3a6-c03f-11ea-9ad7-0242ac11001b to disappear Jul 7 10:48:56.168: INFO: Pod pod-70cea3a6-c03f-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:48:56.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lsmsv" for this suite. 
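A hand-written equivalent of the emptyDir case above, a memory-backed (tmpfs) emptyDir written by a non-root user with 0644 file permissions, could look roughly like this; the UID, image and paths are illustrative assumptions rather than the suite's own values.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root UID
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # Write a file, set 0644, then show its mode and the tmpfs mount.
    command: ["sh", "-c", "echo hello > /data/file && chmod 0644 /data/file && ls -ln /data && grep ' /data ' /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
EOF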
Jul 7 10:49:02.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:49:02.209: INFO: namespace: e2e-tests-emptydir-lsmsv, resource: bindings, ignored listing per whitelist Jul 7 10:49:02.299: INFO: namespace e2e-tests-emptydir-lsmsv deletion completed in 6.125976979s • [SLOW TEST:12.880 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:49:02.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap e2e-tests-configmap-cbc9w/configmap-test-786b3f5d-c03f-11ea-9ad7-0242ac11001b STEP: Creating a pod to test consume configMaps Jul 7 10:49:02.515: INFO: Waiting up to 5m0s for pod "pod-configmaps-786dcdf3-c03f-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-configmap-cbc9w" to be "success or failure" Jul 7 10:49:02.541: INFO: Pod "pod-configmaps-786dcdf3-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.439684ms Jul 7 10:49:04.741: INFO: Pod "pod-configmaps-786dcdf3-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225622675s Jul 7 10:49:06.745: INFO: Pod "pod-configmaps-786dcdf3-c03f-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.229057594s STEP: Saw pod success Jul 7 10:49:06.745: INFO: Pod "pod-configmaps-786dcdf3-c03f-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 10:49:06.747: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-786dcdf3-c03f-11ea-9ad7-0242ac11001b container env-test: STEP: delete the pod Jul 7 10:49:07.050: INFO: Waiting for pod pod-configmaps-786dcdf3-c03f-11ea-9ad7-0242ac11001b to disappear Jul 7 10:49:07.100: INFO: Pod pod-configmaps-786dcdf3-c03f-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:49:07.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-cbc9w" for this suite. 
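Consuming a ConfigMap through the environment, as the spec above does, needs only a configMapKeyRef on an env entry. A minimal sketch; the ConfigMap name, key and value are placeholders, not the suite's generated ones.

kubectl create configmap configmap-env-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    # Print the injected variable and exit.
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1
EOF
kubectl logs pod-configmap-env-demo      # once the pod has completed: CONFIG_DATA_1=value-1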
Jul 7 10:49:13.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:49:13.177: INFO: namespace: e2e-tests-configmap-cbc9w, resource: bindings, ignored listing per whitelist Jul 7 10:49:13.207: INFO: namespace e2e-tests-configmap-cbc9w deletion completed in 6.102450544s • [SLOW TEST:10.908 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:49:13.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 7 10:49:13.447: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7efe02b5-c03f-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-htjcq" to be "success or failure" Jul 7 10:49:13.483: INFO: Pod "downwardapi-volume-7efe02b5-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 35.177351ms Jul 7 10:49:15.487: INFO: Pod "downwardapi-volume-7efe02b5-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0392879s Jul 7 10:49:17.491: INFO: Pod "downwardapi-volume-7efe02b5-c03f-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043266398s STEP: Saw pod success Jul 7 10:49:17.491: INFO: Pod "downwardapi-volume-7efe02b5-c03f-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 10:49:17.494: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-7efe02b5-c03f-11ea-9ad7-0242ac11001b container client-container: STEP: delete the pod Jul 7 10:49:17.556: INFO: Waiting for pod downwardapi-volume-7efe02b5-c03f-11ea-9ad7-0242ac11001b to disappear Jul 7 10:49:17.565: INFO: Pod downwardapi-volume-7efe02b5-c03f-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:49:17.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-htjcq" for this suite. 
Jul 7 10:49:23.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:49:23.595: INFO: namespace: e2e-tests-downward-api-htjcq, resource: bindings, ignored listing per whitelist Jul 7 10:49:23.657: INFO: namespace e2e-tests-downward-api-htjcq deletion completed in 6.089477183s • [SLOW TEST:10.450 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:49:23.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 7 10:49:23.773: INFO: Waiting up to 5m0s for pod "pod-8527b6a4-c03f-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-4xq9w" to be "success or failure" Jul 7 10:49:23.777: INFO: Pod "pod-8527b6a4-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.448638ms Jul 7 10:49:25.780: INFO: Pod "pod-8527b6a4-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006728354s Jul 7 10:49:27.784: INFO: Pod "pod-8527b6a4-c03f-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010878777s STEP: Saw pod success Jul 7 10:49:27.784: INFO: Pod "pod-8527b6a4-c03f-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 10:49:27.787: INFO: Trying to get logs from node hunter-worker pod pod-8527b6a4-c03f-11ea-9ad7-0242ac11001b container test-container: STEP: delete the pod Jul 7 10:49:27.847: INFO: Waiting for pod pod-8527b6a4-c03f-11ea-9ad7-0242ac11001b to disappear Jul 7 10:49:27.879: INFO: Pod pod-8527b6a4-c03f-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:49:27.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4xq9w" for this suite. 
Jul 7 10:49:33.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:49:34.010: INFO: namespace: e2e-tests-emptydir-4xq9w, resource: bindings, ignored listing per whitelist Jul 7 10:49:34.095: INFO: namespace e2e-tests-emptydir-4xq9w deletion completed in 6.210909879s • [SLOW TEST:10.437 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:49:34.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 7 10:49:34.190: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:49:40.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-494qc" for this suite. 
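The failing-init-container scenario above boils down to a pod whose first init container exits non-zero under restartPolicy: Never, so later init containers and the app container never start. A minimal sketch; the /bin/false and /bin/true init pair mirrors the PodSpec dump printed later in this log, while the pod name is a placeholder.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]      # fails, blocking everything after it
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]       # never runs
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1  # never started
EOF
kubectl get pod init-fail-demo   # ends up as Init:Error with the pod phase Failed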
Jul 7 10:49:46.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:49:46.295: INFO: namespace: e2e-tests-init-container-494qc, resource: bindings, ignored listing per whitelist Jul 7 10:49:46.299: INFO: namespace e2e-tests-init-container-494qc deletion completed in 6.155658842s • [SLOW TEST:12.204 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:49:46.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 7 10:49:51.009: INFO: Successfully updated pod "pod-update-activedeadlineseconds-92aa4630-c03f-11ea-9ad7-0242ac11001b" Jul 7 10:49:51.009: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-92aa4630-c03f-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-pods-8k2vv" to be "terminated due to deadline exceeded" Jul 7 10:49:51.182: INFO: Pod "pod-update-activedeadlineseconds-92aa4630-c03f-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 172.732832ms Jul 7 10:49:53.186: INFO: Pod "pod-update-activedeadlineseconds-92aa4630-c03f-11ea-9ad7-0242ac11001b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.1764284s Jul 7 10:49:53.186: INFO: Pod "pod-update-activedeadlineseconds-92aa4630-c03f-11ea-9ad7-0242ac11001b" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:49:53.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-8k2vv" for this suite. 
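activeDeadlineSeconds is one of the few pod spec fields that may be changed on a live pod, which is what the test above relies on. A rough kubectl equivalent, assuming some long-running pod named long-runner already exists (the name and the 5-second deadline are placeholders):

# Give the running pod a 5-second deadline.
kubectl patch pod long-runner -p '{"spec":{"activeDeadlineSeconds":5}}'
# After the deadline the kubelet kills it; phase becomes Failed, reason DeadlineExceeded.
kubectl get pod long-runner -o jsonpath='{.status.phase}{"/"}{.status.reason}{"\n"}'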
Jul 7 10:49:59.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:49:59.327: INFO: namespace: e2e-tests-pods-8k2vv, resource: bindings, ignored listing per whitelist Jul 7 10:49:59.333: INFO: namespace e2e-tests-pods-8k2vv deletion completed in 6.143772892s • [SLOW TEST:13.035 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:49:59.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-9a72d654-c03f-11ea-9ad7-0242ac11001b STEP: Creating a pod to test consume configMaps Jul 7 10:49:59.507: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a7348b9-c03f-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-configmap-qx75x" to be "success or failure" Jul 7 10:49:59.543: INFO: Pod "pod-configmaps-9a7348b9-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 35.839283ms Jul 7 10:50:01.547: INFO: Pod "pod-configmaps-9a7348b9-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039886617s Jul 7 10:50:03.551: INFO: Pod "pod-configmaps-9a7348b9-c03f-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043676284s STEP: Saw pod success Jul 7 10:50:03.551: INFO: Pod "pod-configmaps-9a7348b9-c03f-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 10:50:03.554: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-9a7348b9-c03f-11ea-9ad7-0242ac11001b container configmap-volume-test: STEP: delete the pod Jul 7 10:50:03.613: INFO: Waiting for pod pod-configmaps-9a7348b9-c03f-11ea-9ad7-0242ac11001b to disappear Jul 7 10:50:03.617: INFO: Pod pod-configmaps-9a7348b9-c03f-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:50:03.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-qx75x" for this suite. 
Jul 7 10:50:09.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:50:09.754: INFO: namespace: e2e-tests-configmap-qx75x, resource: bindings, ignored listing per whitelist Jul 7 10:50:09.767: INFO: namespace e2e-tests-configmap-qx75x deletion completed in 6.14609656s • [SLOW TEST:10.433 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:50:09.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0707 10:50:40.710637 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 7 10:50:40.710: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:50:40.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-5clk5" for this suite. 
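Orphaning a Deployment's ReplicaSet, the behaviour checked above, corresponds to deleting with deleteOptions.propagationPolicy=Orphan; with kubectl that is the --cascade flag. A sketch, assuming a throwaway deployment name:

kubectl create deployment orphan-demo --image=docker.io/library/nginx:1.14-alpine
kubectl get rs -l app=orphan-demo        # ReplicaSet owned by the Deployment
# --cascade=false (spelled --cascade=orphan on newer kubectl) leaves dependents behind.
kubectl delete deployment orphan-demo --cascade=false
kubectl get rs -l app=orphan-demo        # the ReplicaSet survives, now without an owner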
Jul 7 10:50:47.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:50:47.411: INFO: namespace: e2e-tests-gc-5clk5, resource: bindings, ignored listing per whitelist Jul 7 10:50:47.422: INFO: namespace e2e-tests-gc-5clk5 deletion completed in 6.707236702s • [SLOW TEST:37.654 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:50:47.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-b71d4a2e-c03f-11ea-9ad7-0242ac11001b STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-b71d4a2e-c03f-11ea-9ad7-0242ac11001b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:50:55.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-zssqk" for this suite. 
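The projected-ConfigMap update test above depends on the kubelet resyncing ConfigMap data into a projected volume while the pod keeps running; a hand-rolled version might look like this (all names and values are illustrative):

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    # Keep printing the mounted key so the update is visible in the logs.
    command: ["sh", "-c", "while true; do cat /projected/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: projected-volume
      mountPath: /projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF
# Change the ConfigMap; the mounted file eventually switches to the new value.
kubectl patch configmap projected-cm-demo -p '{"data":{"data-1":"value-2"}}'
kubectl logs projected-configmap-demo -f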
Jul 7 10:51:17.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:51:17.795: INFO: namespace: e2e-tests-projected-zssqk, resource: bindings, ignored listing per whitelist Jul 7 10:51:17.803: INFO: namespace e2e-tests-projected-zssqk deletion completed in 22.094586235s • [SLOW TEST:30.381 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:51:17.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-bxclr Jul 7 10:51:21.953: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-bxclr STEP: checking the pod's current state and verifying that restartCount is present Jul 7 10:51:21.956: INFO: Initial restart count of pod liveness-http is 0 Jul 7 10:51:42.023: INFO: Restart count of pod e2e-tests-container-probe-bxclr/liveness-http is now 1 (20.067257623s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:51:42.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-bxclr" for this suite. 
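The liveness test above watches restartCount climb after an HTTP /healthz probe starts failing. The usual way to provoke that is an httpGet liveness probe against a server that deliberately goes unhealthy; the k8s.gcr.io/liveness image below is the stock documentation example rather than something read out of this log.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness    # serves /healthz, then starts returning errors
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
kubectl get pod liveness-http-demo -w    # RESTARTS increments once the probe fails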
Jul 7 10:51:48.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:51:48.139: INFO: namespace: e2e-tests-container-probe-bxclr, resource: bindings, ignored listing per whitelist Jul 7 10:51:48.153: INFO: namespace e2e-tests-container-probe-bxclr deletion completed in 6.088995318s • [SLOW TEST:30.350 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:51:48.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-db4b3287-c03f-11ea-9ad7-0242ac11001b STEP: Creating a pod to test consume configMaps Jul 7 10:51:48.341: INFO: Waiting up to 5m0s for pod "pod-configmaps-db530ea4-c03f-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-configmap-92ggw" to be "success or failure" Jul 7 10:51:48.358: INFO: Pod "pod-configmaps-db530ea4-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.416886ms Jul 7 10:51:50.376: INFO: Pod "pod-configmaps-db530ea4-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035716098s Jul 7 10:51:52.381: INFO: Pod "pod-configmaps-db530ea4-c03f-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.040462635s Jul 7 10:51:54.384: INFO: Pod "pod-configmaps-db530ea4-c03f-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.043860013s STEP: Saw pod success Jul 7 10:51:54.384: INFO: Pod "pod-configmaps-db530ea4-c03f-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 10:51:54.387: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-db530ea4-c03f-11ea-9ad7-0242ac11001b container configmap-volume-test: STEP: delete the pod Jul 7 10:51:54.403: INFO: Waiting for pod pod-configmaps-db530ea4-c03f-11ea-9ad7-0242ac11001b to disappear Jul 7 10:51:54.408: INFO: Pod pod-configmaps-db530ea4-c03f-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:51:54.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-92ggw" for this suite. 
Jul 7 10:52:00.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:52:00.508: INFO: namespace: e2e-tests-configmap-92ggw, resource: bindings, ignored listing per whitelist Jul 7 10:52:00.510: INFO: namespace e2e-tests-configmap-92ggw deletion completed in 6.098447478s • [SLOW TEST:12.356 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:52:00.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Jul 7 10:52:00.626: INFO: Waiting up to 5m0s for pod "var-expansion-e2a4676f-c03f-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-var-expansion-lwm7n" to be "success or failure" Jul 7 10:52:00.630: INFO: Pod "var-expansion-e2a4676f-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.957311ms Jul 7 10:52:02.634: INFO: Pod "var-expansion-e2a4676f-c03f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007540775s Jul 7 10:52:04.638: INFO: Pod "var-expansion-e2a4676f-c03f-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011743292s STEP: Saw pod success Jul 7 10:52:04.638: INFO: Pod "var-expansion-e2a4676f-c03f-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 10:52:04.641: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-e2a4676f-c03f-11ea-9ad7-0242ac11001b container dapi-container: STEP: delete the pod Jul 7 10:52:04.803: INFO: Waiting for pod var-expansion-e2a4676f-c03f-11ea-9ad7-0242ac11001b to disappear Jul 7 10:52:04.849: INFO: Pod var-expansion-e2a4676f-c03f-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:52:04.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-lwm7n" for this suite. 
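Substitution of the $(VAR) form in a container's command is done by the kubelet, not by a shell, which is what the variable-expansion spec above verifies. A minimal sketch with placeholder names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    env:
    - name: MESSAGE
      value: "test message"
    # $(MESSAGE) is expanded by Kubernetes before the container starts.
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
EOF
kubectl logs var-expansion-demo    # prints: test message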
Jul 7 10:52:12.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:52:12.959: INFO: namespace: e2e-tests-var-expansion-lwm7n, resource: bindings, ignored listing per whitelist Jul 7 10:52:12.992: INFO: namespace e2e-tests-var-expansion-lwm7n deletion completed in 8.130583469s • [SLOW TEST:12.483 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:52:12.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-8q8qt [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-8q8qt STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-8q8qt STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-8q8qt STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-8q8qt Jul 7 10:52:17.553: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-8q8qt, name: ss-0, uid: ec39d5bb-c03f-11ea-a300-0242ac110004, status phase: Pending. Waiting for statefulset controller to delete. Jul 7 10:52:17.986: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-8q8qt, name: ss-0, uid: ec39d5bb-c03f-11ea-a300-0242ac110004, status phase: Failed. Waiting for statefulset controller to delete. Jul 7 10:52:18.573: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-8q8qt, name: ss-0, uid: ec39d5bb-c03f-11ea-a300-0242ac110004, status phase: Failed. Waiting for statefulset controller to delete. 
Jul 7 10:52:19.084: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-8q8qt STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-8q8qt STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-8q8qt and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jul 7 10:52:28.684: INFO: Deleting all statefulset in ns e2e-tests-statefulset-8q8qt Jul 7 10:52:28.688: INFO: Scaling statefulset ss to 0 Jul 7 10:52:38.994: INFO: Waiting for statefulset status.replicas updated to 0 Jul 7 10:52:38.997: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:52:39.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-statefulset-8q8qt" for this suite. Jul 7 10:52:55.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:52:55.743: INFO: namespace: e2e-tests-statefulset-8q8qt, resource: bindings, ignored listing per whitelist Jul 7 10:52:55.799: INFO: namespace e2e-tests-statefulset-8q8qt deletion completed in 16.491370555s • [SLOW TEST:42.806 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:52:55.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 7 10:52:56.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-03b86d03-c040-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-tz7w4" to be "success or failure" Jul 7 10:52:56.402: INFO: Pod "downwardapi-volume-03b86d03-c040-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.612244ms Jul 7 10:52:58.551: INFO: Pod "downwardapi-volume-03b86d03-c040-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18911543s Jul 7 10:53:00.616: INFO: Pod "downwardapi-volume-03b86d03-c040-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.254296107s Jul 7 10:53:02.833: INFO: Pod "downwardapi-volume-03b86d03-c040-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 6.471346145s Jul 7 10:53:05.132: INFO: Pod "downwardapi-volume-03b86d03-c040-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.769793501s STEP: Saw pod success Jul 7 10:53:05.132: INFO: Pod "downwardapi-volume-03b86d03-c040-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 10:53:05.134: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-03b86d03-c040-11ea-9ad7-0242ac11001b container client-container: STEP: delete the pod Jul 7 10:53:05.433: INFO: Waiting for pod downwardapi-volume-03b86d03-c040-11ea-9ad7-0242ac11001b to disappear Jul 7 10:53:05.934: INFO: Pod downwardapi-volume-03b86d03-c040-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:53:05.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tz7w4" for this suite. Jul 7 10:53:12.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:53:12.052: INFO: namespace: e2e-tests-projected-tz7w4, resource: bindings, ignored listing per whitelist Jul 7 10:53:12.098: INFO: namespace e2e-tests-projected-tz7w4 deletion completed in 6.158313032s • [SLOW TEST:16.298 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:53:12.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-0d52e6bd-c040-11ea-9ad7-0242ac11001b STEP: Creating secret with name s-test-opt-upd-0d52e723-c040-11ea-9ad7-0242ac11001b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0d52e6bd-c040-11ea-9ad7-0242ac11001b STEP: Updating secret s-test-opt-upd-0d52e723-c040-11ea-9ad7-0242ac11001b STEP: Creating secret with name s-test-opt-create-0d52e748-c040-11ea-9ad7-0242ac11001b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:54:31.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-tckrx" for this suite. 
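The optional-secret update case above combines several secret sources in one projected volume, with optional: true allowing the pod to start even while one of them is missing. Roughly, with placeholder names throughout:

kubectl create secret generic s-demo-upd --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-optional-demo
spec:
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    # List the projected directory repeatedly so new keys become visible.
    command: ["sh", "-c", "while true; do ls /projected; sleep 5; done"]
    volumeMounts:
    - name: projected-volume
      mountPath: /projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - secret:
          name: s-demo-upd
      - secret:
          name: s-demo-create    # does not exist yet
          optional: true
EOF
# Creating the missing secret later shows up in the mounted volume after a resync.
kubectl create secret generic s-demo-create --from-literal=data-1=value-1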
Jul 7 10:54:53.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:54:53.576: INFO: namespace: e2e-tests-projected-tckrx, resource: bindings, ignored listing per whitelist Jul 7 10:54:53.599: INFO: namespace e2e-tests-projected-tckrx deletion completed in 22.16124892s • [SLOW TEST:101.501 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:54:53.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jul 7 10:54:53.708: INFO: PodSpec: initContainers in spec.initContainers Jul 7 10:55:44.109: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-49d0cda2-c040-11ea-9ad7-0242ac11001b", GenerateName:"", Namespace:"e2e-tests-init-container-bl5j4", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-bl5j4/pods/pod-init-49d0cda2-c040-11ea-9ad7-0242ac11001b", UID:"49d252db-c040-11ea-a300-0242ac110004", ResourceVersion:"589248", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729716093, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"708234127", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jbgch", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00168a440), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jbgch", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jbgch", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jbgch", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000b2a718), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001b66060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000b2a7a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000b2a7c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000b2a7c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000b2a7cc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729716093, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729716093, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729716093, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729716093, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.2.73", StartTime:(*v1.Time)(0xc000d000e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0007d1f10)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0007d1f80)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://4c4a9b20980933d6dd6267bd27dc640b84d8fec4145bb80a7acc0fc897ba14bd"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d00120), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000d00100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:55:44.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-bl5j4" for this suite. Jul 7 10:56:06.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:56:06.466: INFO: namespace: e2e-tests-init-container-bl5j4, resource: bindings, ignored listing per whitelist Jul 7 10:56:06.473: INFO: namespace e2e-tests-init-container-bl5j4 deletion completed in 22.275228997s • [SLOW TEST:72.874 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:56:06.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-753f027a-c040-11ea-9ad7-0242ac11001b STEP: Creating a pod to test consume configMaps Jul 7 10:56:06.603: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-753fd45e-c040-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-hnmgd" to be "success or failure" Jul 7 10:56:06.642: INFO: Pod "pod-projected-configmaps-753fd45e-c040-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 38.544602ms Jul 7 10:56:08.647: INFO: Pod "pod-projected-configmaps-753fd45e-c040-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043389026s Jul 7 10:56:10.690: INFO: Pod "pod-projected-configmaps-753fd45e-c040-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.086477296s Jul 7 10:56:12.695: INFO: Pod "pod-projected-configmaps-753fd45e-c040-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.091411063s STEP: Saw pod success Jul 7 10:56:12.695: INFO: Pod "pod-projected-configmaps-753fd45e-c040-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 10:56:12.698: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-753fd45e-c040-11ea-9ad7-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jul 7 10:56:12.726: INFO: Waiting for pod pod-projected-configmaps-753fd45e-c040-11ea-9ad7-0242ac11001b to disappear Jul 7 10:56:12.754: INFO: Pod pod-projected-configmaps-753fd45e-c040-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:56:12.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-hnmgd" for this suite. Jul 7 10:56:20.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:56:20.847: INFO: namespace: e2e-tests-projected-hnmgd, resource: bindings, ignored listing per whitelist Jul 7 10:56:20.856: INFO: namespace e2e-tests-projected-hnmgd deletion completed in 8.098818715s • [SLOW TEST:14.383 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:56:20.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jul 7 10:56:21.062: INFO: namespace e2e-tests-kubectl-8hlbm Jul 7 10:56:21.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-8hlbm' Jul 7 10:56:21.471: INFO: stderr: "" Jul 7 10:56:21.471: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jul 7 10:56:22.504: INFO: Selector matched 1 pods for map[app:redis] Jul 7 10:56:22.504: INFO: Found 0 / 1 Jul 7 10:56:23.477: INFO: Selector matched 1 pods for map[app:redis] Jul 7 10:56:23.477: INFO: Found 0 / 1 Jul 7 10:56:24.582: INFO: Selector matched 1 pods for map[app:redis] Jul 7 10:56:24.583: INFO: Found 0 / 1 Jul 7 10:56:25.547: INFO: Selector matched 1 pods for map[app:redis] Jul 7 10:56:25.547: INFO: Found 1 / 1 Jul 7 10:56:25.547: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Jul 7 10:56:25.550: INFO: Selector matched 1 pods for map[app:redis] Jul 7 10:56:25.550: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 7 10:56:25.550: INFO: wait on redis-master startup in e2e-tests-kubectl-8hlbm Jul 7 10:56:25.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-xzmhj redis-master --namespace=e2e-tests-kubectl-8hlbm' Jul 7 10:56:25.722: INFO: stderr: "" Jul 7 10:56:25.723: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 07 Jul 10:56:25.048 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jul 10:56:25.048 # Server started, Redis version 3.2.12\n1:M 07 Jul 10:56:25.049 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jul 10:56:25.049 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jul 7 10:56:25.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-8hlbm' Jul 7 10:56:26.010: INFO: stderr: "" Jul 7 10:56:26.010: INFO: stdout: "service/rm2 exposed\n" Jul 7 10:56:26.061: INFO: Service rm2 in namespace e2e-tests-kubectl-8hlbm found. STEP: exposing service Jul 7 10:56:28.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-8hlbm' Jul 7 10:56:28.301: INFO: stderr: "" Jul 7 10:56:28.301: INFO: stdout: "service/rm3 exposed\n" Jul 7 10:56:28.307: INFO: Service rm3 in namespace e2e-tests-kubectl-8hlbm found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:56:30.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-8hlbm" for this suite. 
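
For context on the expose steps above: `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` is roughly equivalent to creating a Service that selects the RC's pods. A hedged Go sketch of that object, with the selector assumed from the app=redis label the log shows the RC matching:

package main

import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
        // Service equivalent of the expose command: clients connect to port
        // 1234 on the service and traffic is forwarded to port 6379 on pods
        // matching the selector.
        svc := &v1.Service{
                ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
                Spec: v1.ServiceSpec{
                        Selector: map[string]string{"app": "redis"},
                        Ports: []v1.ServicePort{{
                                Protocol:   v1.ProtocolTCP,
                                Port:       1234,
                                TargetPort: intstr.FromInt(6379),
                        }},
                },
        }
        fmt.Println("would create service:", svc.Name)
}

Exposing the resulting service again as rm3 on port 2345 just layers a second Service with a different port in front of the same selector.
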
Jul 7 10:56:54.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:56:54.389: INFO: namespace: e2e-tests-kubectl-8hlbm, resource: bindings, ignored listing per whitelist Jul 7 10:56:54.398: INFO: namespace e2e-tests-kubectl-8hlbm deletion completed in 24.079594199s • [SLOW TEST:33.541 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:56:54.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-mmlhs Jul 7 10:56:58.566: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-mmlhs STEP: checking the pod's current state and verifying that restartCount is present Jul 7 10:56:58.569: INFO: Initial restart count of pod liveness-http is 0 Jul 7 10:57:14.605: INFO: Restart count of pod e2e-tests-container-probe-mmlhs/liveness-http is now 1 (16.036345421s elapsed) Jul 7 10:57:34.683: INFO: Restart count of pod e2e-tests-container-probe-mmlhs/liveness-http is now 2 (36.114179256s elapsed) Jul 7 10:57:54.745: INFO: Restart count of pod e2e-tests-container-probe-mmlhs/liveness-http is now 3 (56.176134218s elapsed) Jul 7 10:58:14.787: INFO: Restart count of pod e2e-tests-container-probe-mmlhs/liveness-http is now 4 (1m16.218325035s elapsed) Jul 7 10:59:27.582: INFO: Restart count of pod e2e-tests-container-probe-mmlhs/liveness-http is now 5 (2m29.012968348s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:59:27.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-mmlhs" for this suite. 
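
The liveness-http pod above restarts repeatedly because its HTTP liveness probe keeps failing, which is what drives the monotonically increasing restart count. A rough Go sketch of a container carrying such a probe; the image, path, port, and thresholds are placeholders, and the embedded field is named Handler in the v1.13-era API used here (much newer releases call it ProbeHandler):

package main

import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
        // Each time the probe fails FailureThreshold times in a row, the
        // kubelet kills and restarts the container, bumping restartCount.
        pod := &v1.Pod{
                ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"},
                Spec: v1.PodSpec{
                        Containers: []v1.Container{{
                                Name:  "liveness",
                                Image: "docker.io/library/nginx:1.14-alpine", // placeholder image
                                LivenessProbe: &v1.Probe{
                                        Handler: v1.Handler{ // ProbeHandler in newer API versions
                                                HTTPGet: &v1.HTTPGetAction{
                                                        Path: "/healthz",
                                                        Port: intstr.FromInt(8080),
                                                },
                                        },
                                        InitialDelaySeconds: 5,
                                        PeriodSeconds:       3,
                                        FailureThreshold:    1,
                                },
                        }},
                },
        }
        fmt.Println("would create pod:", pod.Name)
}
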
Jul 7 10:59:33.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:59:33.708: INFO: namespace: e2e-tests-container-probe-mmlhs, resource: bindings, ignored listing per whitelist Jul 7 10:59:33.714: INFO: namespace e2e-tests-container-probe-mmlhs deletion completed in 6.105725459s • [SLOW TEST:159.317 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should have monotonically increasing restart count [Slow][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:59:33.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 7 10:59:33.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-rtgh4' Jul 7 10:59:36.290: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 7 10:59:36.290: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jul 7 10:59:38.340: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-bkv9x] Jul 7 10:59:38.340: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-bkv9x" in namespace "e2e-tests-kubectl-rtgh4" to be "running and ready" Jul 7 10:59:38.344: INFO: Pod "e2e-test-nginx-rc-bkv9x": Phase="Pending", Reason="", readiness=false. Elapsed: 3.523922ms Jul 7 10:59:40.348: INFO: Pod "e2e-test-nginx-rc-bkv9x": Phase="Running", Reason="", readiness=true. Elapsed: 2.007874076s Jul 7 10:59:40.348: INFO: Pod "e2e-test-nginx-rc-bkv9x" satisfied condition "running and ready" Jul 7 10:59:40.348: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-bkv9x] Jul 7 10:59:40.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-rtgh4' Jul 7 10:59:40.486: INFO: stderr: "" Jul 7 10:59:40.486: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303 Jul 7 10:59:40.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-rtgh4' Jul 7 10:59:40.601: INFO: stderr: "" Jul 7 10:59:40.601: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:59:40.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-rtgh4" for this suite. Jul 7 10:59:46.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 10:59:46.985: INFO: namespace: e2e-tests-kubectl-rtgh4, resource: bindings, ignored listing per whitelist Jul 7 10:59:47.001: INFO: namespace e2e-tests-kubectl-rtgh4 deletion completed in 6.397033252s • [SLOW TEST:13.287 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 10:59:47.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on node default medium Jul 7 10:59:47.119: INFO: Waiting up to 5m0s for pod "pod-f8b2f89e-c040-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-4jmfh" to be "success or failure" Jul 7 10:59:47.139: INFO: Pod "pod-f8b2f89e-c040-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.785625ms Jul 7 10:59:49.322: INFO: Pod "pod-f8b2f89e-c040-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202666731s Jul 7 10:59:51.326: INFO: Pod "pod-f8b2f89e-c040-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.207015487s Jul 7 10:59:53.331: INFO: Pod "pod-f8b2f89e-c040-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.211900333s STEP: Saw pod success Jul 7 10:59:53.331: INFO: Pod "pod-f8b2f89e-c040-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 10:59:53.334: INFO: Trying to get logs from node hunter-worker pod pod-f8b2f89e-c040-11ea-9ad7-0242ac11001b container test-container: STEP: delete the pod Jul 7 10:59:53.921: INFO: Waiting for pod pod-f8b2f89e-c040-11ea-9ad7-0242ac11001b to disappear Jul 7 10:59:54.177: INFO: Pod pod-f8b2f89e-c040-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 10:59:54.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-4jmfh" for this suite. Jul 7 11:00:00.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:00:00.373: INFO: namespace: e2e-tests-emptydir-4jmfh, resource: bindings, ignored listing per whitelist Jul 7 11:00:00.415: INFO: namespace e2e-tests-emptydir-4jmfh deletion completed in 6.233650654s • [SLOW TEST:13.413 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:00:00.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: getting the auto-created API token Jul 7 11:00:01.062: INFO: created pod pod-service-account-defaultsa Jul 7 11:00:01.062: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jul 7 11:00:01.112: INFO: created pod pod-service-account-mountsa Jul 7 11:00:01.113: INFO: pod pod-service-account-mountsa service account token volume mount: true Jul 7 11:00:01.369: INFO: created pod pod-service-account-nomountsa Jul 7 11:00:01.369: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jul 7 11:00:01.382: INFO: created pod pod-service-account-defaultsa-mountspec Jul 7 11:00:01.382: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jul 7 11:00:01.415: INFO: created pod pod-service-account-mountsa-mountspec Jul 7 11:00:01.415: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jul 7 11:00:01.460: INFO: created pod pod-service-account-nomountsa-mountspec Jul 7 11:00:01.460: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jul 7 11:00:01.531: INFO: created pod pod-service-account-defaultsa-nomountspec Jul 7 11:00:01.531: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false 
Jul 7 11:00:01.540: INFO: created pod pod-service-account-mountsa-nomountspec Jul 7 11:00:01.541: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jul 7 11:00:01.584: INFO: created pod pod-service-account-nomountsa-nomountspec Jul 7 11:00:01.584: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:00:01.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svcaccounts-7d5rj" for this suite. Jul 7 11:00:35.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:00:35.927: INFO: namespace: e2e-tests-svcaccounts-7d5rj, resource: bindings, ignored listing per whitelist Jul 7 11:00:35.966: INFO: namespace e2e-tests-svcaccounts-7d5rj deletion completed in 34.294793427s • [SLOW TEST:35.551 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:00:35.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-15e9347e-c041-11ea-9ad7-0242ac11001b STEP: Creating a pod to test consume secrets Jul 7 11:00:36.175: INFO: Waiting up to 5m0s for pod "pod-secrets-15ed31be-c041-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-secrets-v4xdd" to be "success or failure" Jul 7 11:00:36.179: INFO: Pod "pod-secrets-15ed31be-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.748218ms Jul 7 11:00:38.183: INFO: Pod "pod-secrets-15ed31be-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008096237s Jul 7 11:00:40.187: INFO: Pod "pod-secrets-15ed31be-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011953062s Jul 7 11:00:42.192: INFO: Pod "pod-secrets-15ed31be-c041-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016605185s STEP: Saw pod success Jul 7 11:00:42.192: INFO: Pod "pod-secrets-15ed31be-c041-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:00:42.195: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-15ed31be-c041-11ea-9ad7-0242ac11001b container secret-volume-test: STEP: delete the pod Jul 7 11:00:42.222: INFO: Waiting for pod pod-secrets-15ed31be-c041-11ea-9ad7-0242ac11001b to disappear Jul 7 11:00:42.250: INFO: Pod pod-secrets-15ed31be-c041-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:00:42.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-v4xdd" for this suite. Jul 7 11:00:48.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:00:48.323: INFO: namespace: e2e-tests-secrets-v4xdd, resource: bindings, ignored listing per whitelist Jul 7 11:00:48.334: INFO: namespace e2e-tests-secrets-v4xdd deletion completed in 6.080316562s • [SLOW TEST:12.368 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:00:48.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-1d403daa-c041-11ea-9ad7-0242ac11001b STEP: Creating a pod to test consume configMaps Jul 7 11:00:48.504: INFO: Waiting up to 5m0s for pod "pod-configmaps-1d40ee12-c041-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-configmap-dwn67" to be "success or failure" Jul 7 11:00:48.520: INFO: Pod "pod-configmaps-1d40ee12-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.66331ms Jul 7 11:00:50.524: INFO: Pod "pod-configmaps-1d40ee12-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02075077s Jul 7 11:00:52.529: INFO: Pod "pod-configmaps-1d40ee12-c041-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025164836s STEP: Saw pod success Jul 7 11:00:52.529: INFO: Pod "pod-configmaps-1d40ee12-c041-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:00:52.531: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-1d40ee12-c041-11ea-9ad7-0242ac11001b container configmap-volume-test: STEP: delete the pod Jul 7 11:00:52.684: INFO: Waiting for pod pod-configmaps-1d40ee12-c041-11ea-9ad7-0242ac11001b to disappear Jul 7 11:00:52.730: INFO: Pod pod-configmaps-1d40ee12-c041-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:00:52.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-dwn67" for this suite. Jul 7 11:00:58.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:00:58.856: INFO: namespace: e2e-tests-configmap-dwn67, resource: bindings, ignored listing per whitelist Jul 7 11:00:58.901: INFO: namespace e2e-tests-configmap-dwn67 deletion completed in 6.112711709s • [SLOW TEST:10.567 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:00:58.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 7 11:00:59.081: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23936273-c041-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-cf99d" to be "success or failure" Jul 7 11:00:59.123: INFO: Pod "downwardapi-volume-23936273-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 42.017003ms Jul 7 11:01:01.128: INFO: Pod "downwardapi-volume-23936273-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046379407s Jul 7 11:01:03.131: INFO: Pod "downwardapi-volume-23936273-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049890522s Jul 7 11:01:05.148: INFO: Pod "downwardapi-volume-23936273-c041-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.066247802s STEP: Saw pod success Jul 7 11:01:05.148: INFO: Pod "downwardapi-volume-23936273-c041-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:01:05.151: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-23936273-c041-11ea-9ad7-0242ac11001b container client-container: STEP: delete the pod Jul 7 11:01:05.195: INFO: Waiting for pod downwardapi-volume-23936273-c041-11ea-9ad7-0242ac11001b to disappear Jul 7 11:01:05.215: INFO: Pod downwardapi-volume-23936273-c041-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:01:05.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-cf99d" for this suite. Jul 7 11:01:11.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:01:11.252: INFO: namespace: e2e-tests-projected-cf99d, resource: bindings, ignored listing per whitelist Jul 7 11:01:11.348: INFO: namespace e2e-tests-projected-cf99d deletion completed in 6.130120561s • [SLOW TEST:12.447 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:01:11.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-s2q76 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 7 11:01:11.439: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 7 11:01:41.638: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.88:8080/dial?request=hostName&protocol=http&host=10.244.1.226&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-s2q76 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 7 11:01:41.638: INFO: >>> kubeConfig: /root/.kube/config I0707 11:01:41.668026 6 log.go:172] (0xc0019a0000) (0xc001a1e0a0) Create stream I0707 11:01:41.668062 6 log.go:172] (0xc0019a0000) (0xc001a1e0a0) Stream added, broadcasting: 1 I0707 11:01:41.670695 6 log.go:172] (0xc0019a0000) Reply frame received for 1 I0707 11:01:41.670739 6 log.go:172] (0xc0019a0000) (0xc00041dc20) Create stream I0707 11:01:41.670750 6 log.go:172] (0xc0019a0000) (0xc00041dc20) Stream added, broadcasting: 3 I0707 11:01:41.671557 6 log.go:172] 
(0xc0019a0000) Reply frame received for 3 I0707 11:01:41.671599 6 log.go:172] (0xc0019a0000) (0xc001ef1cc0) Create stream I0707 11:01:41.671613 6 log.go:172] (0xc0019a0000) (0xc001ef1cc0) Stream added, broadcasting: 5 I0707 11:01:41.672353 6 log.go:172] (0xc0019a0000) Reply frame received for 5 I0707 11:01:41.810419 6 log.go:172] (0xc0019a0000) Data frame received for 3 I0707 11:01:41.810466 6 log.go:172] (0xc00041dc20) (3) Data frame handling I0707 11:01:41.810491 6 log.go:172] (0xc00041dc20) (3) Data frame sent I0707 11:01:41.811085 6 log.go:172] (0xc0019a0000) Data frame received for 5 I0707 11:01:41.811123 6 log.go:172] (0xc001ef1cc0) (5) Data frame handling I0707 11:01:41.811146 6 log.go:172] (0xc0019a0000) Data frame received for 3 I0707 11:01:41.811158 6 log.go:172] (0xc00041dc20) (3) Data frame handling I0707 11:01:41.813052 6 log.go:172] (0xc0019a0000) Data frame received for 1 I0707 11:01:41.813089 6 log.go:172] (0xc001a1e0a0) (1) Data frame handling I0707 11:01:41.813320 6 log.go:172] (0xc001a1e0a0) (1) Data frame sent I0707 11:01:41.813389 6 log.go:172] (0xc0019a0000) (0xc001a1e0a0) Stream removed, broadcasting: 1 I0707 11:01:41.813432 6 log.go:172] (0xc0019a0000) Go away received I0707 11:01:41.813679 6 log.go:172] (0xc0019a0000) (0xc001a1e0a0) Stream removed, broadcasting: 1 I0707 11:01:41.813708 6 log.go:172] (0xc0019a0000) (0xc00041dc20) Stream removed, broadcasting: 3 I0707 11:01:41.813722 6 log.go:172] (0xc0019a0000) (0xc001ef1cc0) Stream removed, broadcasting: 5 Jul 7 11:01:41.813: INFO: Waiting for endpoints: map[] Jul 7 11:01:41.860: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.88:8080/dial?request=hostName&protocol=http&host=10.244.2.87&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-s2q76 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 7 11:01:41.860: INFO: >>> kubeConfig: /root/.kube/config I0707 11:01:41.889491 6 log.go:172] (0xc0024182c0) (0xc002313220) Create stream I0707 11:01:41.889527 6 log.go:172] (0xc0024182c0) (0xc002313220) Stream added, broadcasting: 1 I0707 11:01:41.892171 6 log.go:172] (0xc0024182c0) Reply frame received for 1 I0707 11:01:41.892205 6 log.go:172] (0xc0024182c0) (0xc00041dd60) Create stream I0707 11:01:41.892216 6 log.go:172] (0xc0024182c0) (0xc00041dd60) Stream added, broadcasting: 3 I0707 11:01:41.893013 6 log.go:172] (0xc0024182c0) Reply frame received for 3 I0707 11:01:41.893036 6 log.go:172] (0xc0024182c0) (0xc001a1e320) Create stream I0707 11:01:41.893047 6 log.go:172] (0xc0024182c0) (0xc001a1e320) Stream added, broadcasting: 5 I0707 11:01:41.894243 6 log.go:172] (0xc0024182c0) Reply frame received for 5 I0707 11:01:41.952060 6 log.go:172] (0xc0024182c0) Data frame received for 3 I0707 11:01:41.952106 6 log.go:172] (0xc00041dd60) (3) Data frame handling I0707 11:01:41.952154 6 log.go:172] (0xc00041dd60) (3) Data frame sent I0707 11:01:41.952997 6 log.go:172] (0xc0024182c0) Data frame received for 5 I0707 11:01:41.953022 6 log.go:172] (0xc001a1e320) (5) Data frame handling I0707 11:01:41.953237 6 log.go:172] (0xc0024182c0) Data frame received for 3 I0707 11:01:41.953431 6 log.go:172] (0xc00041dd60) (3) Data frame handling I0707 11:01:41.955012 6 log.go:172] (0xc0024182c0) Data frame received for 1 I0707 11:01:41.955042 6 log.go:172] (0xc002313220) (1) Data frame handling I0707 11:01:41.955081 6 log.go:172] (0xc002313220) (1) Data frame sent I0707 11:01:41.955127 6 log.go:172] (0xc0024182c0) (0xc002313220) Stream 
removed, broadcasting: 1 I0707 11:01:41.955197 6 log.go:172] (0xc0024182c0) Go away received I0707 11:01:41.955253 6 log.go:172] (0xc0024182c0) (0xc002313220) Stream removed, broadcasting: 1 I0707 11:01:41.955282 6 log.go:172] (0xc0024182c0) (0xc00041dd60) Stream removed, broadcasting: 3 I0707 11:01:41.955303 6 log.go:172] (0xc0024182c0) (0xc001a1e320) Stream removed, broadcasting: 5 Jul 7 11:01:41.955: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:01:41.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pod-network-test-s2q76" for this suite. Jul 7 11:02:04.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:02:04.100: INFO: namespace: e2e-tests-pod-network-test-s2q76, resource: bindings, ignored listing per whitelist Jul 7 11:02:04.128: INFO: namespace e2e-tests-pod-network-test-s2q76 deletion completed in 22.168292729s • [SLOW TEST:52.779 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:02:04.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:02:08.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-vltxc" for this suite. 
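
The hostAliases spec above relies on PodSpec.HostAliases, which the kubelet renders into the container's /etc/hosts; the test then reads that file back. A minimal Go sketch, with the IP and hostnames made up for illustration:

package main

import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
        // HostAliases entries are appended by the kubelet to the pod's
        // /etc/hosts, which is what the Kubelet test above asserts on.
        pod := &v1.Pod{
                ObjectMeta: metav1.ObjectMeta{Name: "hostaliases-demo"},
                Spec: v1.PodSpec{
                        RestartPolicy: v1.RestartPolicyNever,
                        HostAliases: []v1.HostAlias{{
                                IP:        "123.45.67.89",
                                Hostnames: []string{"foo.local", "bar.local"},
                        }},
                        Containers: []v1.Container{{
                                Name:    "busybox",
                                Image:   "docker.io/library/busybox:1.29",
                                Command: []string{"cat", "/etc/hosts"},
                        }},
                },
        }
        fmt.Println("would create pod:", pod.Name)
}
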
Jul 7 11:03:00.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:03:00.450: INFO: namespace: e2e-tests-kubelet-test-vltxc, resource: bindings, ignored listing per whitelist Jul 7 11:03:00.486: INFO: namespace e2e-tests-kubelet-test-vltxc deletion completed in 52.223484748s • [SLOW TEST:56.358 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:03:00.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jul 7 11:03:00.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6c294225-c041-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-8tvg6" to be "success or failure" Jul 7 11:03:00.840: INFO: Pod "downwardapi-volume-6c294225-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.839688ms Jul 7 11:03:02.845: INFO: Pod "downwardapi-volume-6c294225-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008685406s Jul 7 11:03:04.849: INFO: Pod "downwardapi-volume-6c294225-c041-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.012399967s Jul 7 11:03:06.867: INFO: Pod "downwardapi-volume-6c294225-c041-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031068931s STEP: Saw pod success Jul 7 11:03:06.867: INFO: Pod "downwardapi-volume-6c294225-c041-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:03:06.870: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6c294225-c041-11ea-9ad7-0242ac11001b container client-container: STEP: delete the pod Jul 7 11:03:07.055: INFO: Waiting for pod downwardapi-volume-6c294225-c041-11ea-9ad7-0242ac11001b to disappear Jul 7 11:03:07.088: INFO: Pod downwardapi-volume-6c294225-c041-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:03:07.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-8tvg6" for this suite. 
Jul 7 11:03:13.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:03:13.208: INFO: namespace: e2e-tests-projected-8tvg6, resource: bindings, ignored listing per whitelist Jul 7 11:03:13.272: INFO: namespace e2e-tests-projected-8tvg6 deletion completed in 6.179884454s • [SLOW TEST:12.785 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:03:13.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name projected-configmap-test-volume-map-73a4ae3f-c041-11ea-9ad7-0242ac11001b STEP: Creating a pod to test consume configMaps Jul 7 11:03:13.402: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-73a6e18c-c041-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-gfrqj" to be "success or failure" Jul 7 11:03:13.432: INFO: Pod "pod-projected-configmaps-73a6e18c-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.239762ms Jul 7 11:03:15.438: INFO: Pod "pod-projected-configmaps-73a6e18c-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035621591s Jul 7 11:03:17.454: INFO: Pod "pod-projected-configmaps-73a6e18c-c041-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051982436s STEP: Saw pod success Jul 7 11:03:17.454: INFO: Pod "pod-projected-configmaps-73a6e18c-c041-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:03:17.456: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-73a6e18c-c041-11ea-9ad7-0242ac11001b container projected-configmap-volume-test: STEP: delete the pod Jul 7 11:03:17.531: INFO: Waiting for pod pod-projected-configmaps-73a6e18c-c041-11ea-9ad7-0242ac11001b to disappear Jul 7 11:03:17.640: INFO: Pod pod-projected-configmaps-73a6e18c-c041-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:03:17.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-gfrqj" for this suite. 
Jul 7 11:03:23.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:03:23.850: INFO: namespace: e2e-tests-projected-gfrqj, resource: bindings, ignored listing per whitelist Jul 7 11:03:23.855: INFO: namespace e2e-tests-projected-gfrqj deletion completed in 6.210739539s • [SLOW TEST:10.583 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:03:23.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:03:23.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-6nc44" for this suite. 
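The "Pods Set QOS Class" case above only creates a pod and checks that status.qosClass is populated. For reference, a hedged sketch of resource settings that would yield the Guaranteed class; the 100m/100Mi figures are arbitrary:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    // A pod is classed Guaranteed when every container sets limits and its
    // requests equal those limits for both cpu and memory; requests below the
    // limits give Burstable, and no requests or limits at all give BestEffort.
    res := corev1.ResourceRequirements{
        Requests: corev1.ResourceList{
            corev1.ResourceCPU:    resource.MustParse("100m"),
            corev1.ResourceMemory: resource.MustParse("100Mi"),
        },
        Limits: corev1.ResourceList{
            corev1.ResourceCPU:    resource.MustParse("100m"),
            corev1.ResourceMemory: resource.MustParse("100Mi"),
        },
    }
    fmt.Printf("%+v\n", res)
    fmt.Println("expected status.qosClass:", corev1.PodQOSGuaranteed)
}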
Jul 7 11:03:46.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:03:46.149: INFO: namespace: e2e-tests-pods-6nc44, resource: bindings, ignored listing per whitelist Jul 7 11:03:46.151: INFO: namespace e2e-tests-pods-6nc44 deletion completed in 22.141427105s • [SLOW TEST:22.295 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:03:46.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir volume type on tmpfs Jul 7 11:03:46.276: INFO: Waiting up to 5m0s for pod "pod-873f662d-c041-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-sp9x5" to be "success or failure" Jul 7 11:03:46.285: INFO: Pod "pod-873f662d-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.062678ms Jul 7 11:03:48.290: INFO: Pod "pod-873f662d-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014055051s Jul 7 11:03:50.294: INFO: Pod "pod-873f662d-c041-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01849093s STEP: Saw pod success Jul 7 11:03:50.294: INFO: Pod "pod-873f662d-c041-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:03:50.297: INFO: Trying to get logs from node hunter-worker pod pod-873f662d-c041-11ea-9ad7-0242ac11001b container test-container: STEP: delete the pod Jul 7 11:03:50.316: INFO: Waiting for pod pod-873f662d-c041-11ea-9ad7-0242ac11001b to disappear Jul 7 11:03:50.370: INFO: Pod pod-873f662d-c041-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:03:50.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-sp9x5" for this suite. 
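The emptyDir-on-tmpfs case above mounts an emptyDir backed by memory and verifies the mount's type and mode from inside the pod. A minimal sketch of the volume declaration (the volume name is illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Medium "Memory" backs the emptyDir with tmpfs; the test container then
    // reports the mount's filesystem type and permission bits.
    vol := corev1.Volume{
        Name: "test-volume",
        VolumeSource: corev1.VolumeSource{
            EmptyDir: &corev1.EmptyDirVolumeSource{
                Medium: corev1.StorageMediumMemory,
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}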
Jul 7 11:03:56.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:03:56.408: INFO: namespace: e2e-tests-emptydir-sp9x5, resource: bindings, ignored listing per whitelist Jul 7 11:03:56.469: INFO: namespace e2e-tests-emptydir-sp9x5 deletion completed in 6.093875597s • [SLOW TEST:10.318 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:03:56.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jul 7 11:03:56.629: INFO: Waiting up to 5m0s for pod "downward-api-8d6911d2-c041-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-5rhhp" to be "success or failure" Jul 7 11:03:56.730: INFO: Pod "downward-api-8d6911d2-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 101.392237ms Jul 7 11:03:58.734: INFO: Pod "downward-api-8d6911d2-c041-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105279209s Jul 7 11:04:00.743: INFO: Pod "downward-api-8d6911d2-c041-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.113963763s STEP: Saw pod success Jul 7 11:04:00.743: INFO: Pod "downward-api-8d6911d2-c041-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:04:00.745: INFO: Trying to get logs from node hunter-worker pod downward-api-8d6911d2-c041-11ea-9ad7-0242ac11001b container dapi-container: STEP: delete the pod Jul 7 11:04:00.942: INFO: Waiting for pod downward-api-8d6911d2-c041-11ea-9ad7-0242ac11001b to disappear Jul 7 11:04:01.233: INFO: Pod downward-api-8d6911d2-c041-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:04:01.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-5rhhp" for this suite. 
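The Downward API case above exposes the container's cpu/memory limits and requests as environment variables. A sketch of two such variables using resourceFieldRef; the variable names are invented, and "dapi-container" is simply the container name that appears in the log:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Each variable is resolved by the kubelet from the container's own
    // resource requests/limits when the pod starts.
    envs := []corev1.EnvVar{
        {
            Name: "CPU_LIMIT",
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{
                    ContainerName: "dapi-container", // container whose resources are read
                    Resource:      "limits.cpu",
                },
            },
        },
        {
            Name: "MEMORY_REQUEST",
            ValueFrom: &corev1.EnvVarSource{
                ResourceFieldRef: &corev1.ResourceFieldSelector{
                    ContainerName: "dapi-container",
                    Resource:      "requests.memory",
                },
            },
        },
    }
    fmt.Printf("%+v\n", envs)
}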
Jul 7 11:04:07.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:04:07.507: INFO: namespace: e2e-tests-downward-api-5rhhp, resource: bindings, ignored listing per whitelist Jul 7 11:04:07.511: INFO: namespace e2e-tests-downward-api-5rhhp deletion completed in 6.273050061s • [SLOW TEST:11.042 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:04:07.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 7 11:04:07.648: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:04:11.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-nl96d" for this suite. 
Jul 7 11:04:51.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:04:51.868: INFO: namespace: e2e-tests-pods-nl96d, resource: bindings, ignored listing per whitelist Jul 7 11:04:51.884: INFO: namespace e2e-tests-pods-nl96d deletion completed in 40.087844776s • [SLOW TEST:44.373 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:04:51.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jul 7 11:04:59.246: INFO: 3 pods remaining Jul 7 11:04:59.246: INFO: 0 pods has nil DeletionTimestamp Jul 7 11:04:59.246: INFO: Jul 7 11:05:00.129: INFO: 0 pods remaining Jul 7 11:05:00.129: INFO: 0 pods has nil DeletionTimestamp Jul 7 11:05:00.129: INFO: Jul 7 11:05:00.880: INFO: 0 pods remaining Jul 7 11:05:00.880: INFO: 0 pods has nil DeletionTimestamp Jul 7 11:05:00.880: INFO: STEP: Gathering metrics W0707 11:05:02.110549 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jul 7 11:05:02.110: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:05:02.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-zbg6r" for this suite. Jul 7 11:05:08.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:05:08.427: INFO: namespace: e2e-tests-gc-zbg6r, resource: bindings, ignored listing per whitelist Jul 7 11:05:08.446: INFO: namespace e2e-tests-gc-zbg6r deletion completed in 6.268617014s • [SLOW TEST:16.561 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:05:08.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 7 11:05:13.064: INFO: Successfully updated pod "pod-update-b84569e7-c041-11ea-9ad7-0242ac11001b" STEP: verifying the updated pod is in kubernetes Jul 7 11:05:13.075: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:05:13.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-vq9nh" for this suite. 
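The "Pods should be updated" case above creates a pod, mutates it, and confirms the change is visible on re-read. A hedged sketch of the shape of such an update; the pod name and the "time" label are illustrative, and the client-go call mentioned in the comment is the usual way the mutated object would be written back:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "pod-update-example", // illustrative name
            Labels: map[string]string{"name": "foo"},
        },
    }

    // An update is just a mutation of the object followed by a write back to
    // the API server (clientset.CoreV1().Pods(ns).Update(...) in client-go);
    // the test then re-reads the pod and checks the change took effect.
    pod.Labels["time"] = "value"

    fmt.Printf("%+v\n", pod.ObjectMeta.Labels)
}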
Jul 7 11:05:37.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:05:37.111: INFO: namespace: e2e-tests-pods-vq9nh, resource: bindings, ignored listing per whitelist Jul 7 11:05:37.164: INFO: namespace e2e-tests-pods-vq9nh deletion completed in 24.085688586s • [SLOW TEST:28.718 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:05:37.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name s-test-opt-del-c9860e5d-c041-11ea-9ad7-0242ac11001b STEP: Creating secret with name s-test-opt-upd-c9860ecd-c041-11ea-9ad7-0242ac11001b STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c9860e5d-c041-11ea-9ad7-0242ac11001b STEP: Updating secret s-test-opt-upd-c9860ecd-c041-11ea-9ad7-0242ac11001b STEP: Creating secret with name s-test-opt-create-c9860ee8-c041-11ea-9ad7-0242ac11001b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:06:48.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-zwgth" for this suite. 
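The optional-secret case above mounts secret volumes, then deletes, updates and creates the referenced Secrets and waits for the mounted files to follow. A sketch of a secret volume marked optional (the secret name here is illustrative; the test generates its own):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    optional := true

    // With Optional set, the pod starts even if the referenced Secret does not
    // exist yet; the kubelet syncs the mounted files as the Secret is created,
    // updated, or deleted, which is what the test waits to observe.
    vol := corev1.Volume{
        Name: "secret-volume",
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{
                SecretName: "s-test-opt-create", // illustrative; the test uses generated names
                Optional:   &optional,
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}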
Jul 7 11:07:10.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:07:10.466: INFO: namespace: e2e-tests-secrets-zwgth, resource: bindings, ignored listing per whitelist Jul 7 11:07:10.487: INFO: namespace e2e-tests-secrets-zwgth deletion completed in 22.412421429s • [SLOW TEST:93.323 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:07:10.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 7 11:07:11.510: INFO: Creating ReplicaSet my-hostname-basic-01947e58-c042-11ea-9ad7-0242ac11001b Jul 7 11:07:11.523: INFO: Pod name my-hostname-basic-01947e58-c042-11ea-9ad7-0242ac11001b: Found 0 pods out of 1 Jul 7 11:07:16.552: INFO: Pod name my-hostname-basic-01947e58-c042-11ea-9ad7-0242ac11001b: Found 1 pods out of 1 Jul 7 11:07:16.552: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-01947e58-c042-11ea-9ad7-0242ac11001b" is running Jul 7 11:07:16.555: INFO: Pod "my-hostname-basic-01947e58-c042-11ea-9ad7-0242ac11001b-27prx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 11:07:11 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 11:07:14 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 11:07:14 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 11:07:11 +0000 UTC Reason: Message:}]) Jul 7 11:07:16.555: INFO: Trying to dial the pod Jul 7 11:07:21.568: INFO: Controller my-hostname-basic-01947e58-c042-11ea-9ad7-0242ac11001b: Got expected result from replica 1 [my-hostname-basic-01947e58-c042-11ea-9ad7-0242ac11001b-27prx]: "my-hostname-basic-01947e58-c042-11ea-9ad7-0242ac11001b-27prx", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:07:21.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-replicaset-kq65k" for this suite. 
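The ReplicaSet case above creates a single-replica set serving the pod's hostname and then dials the replica. A rough sketch of an equivalent ReplicaSet object; the name, label, image tag and port are assumptions, not values taken from the test source:

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(1)
    labels := map[string]string{"name": "my-hostname-basic"} // illustrative label

    rs := appsv1.ReplicaSet{
        ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
        Spec: appsv1.ReplicaSetSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "my-hostname-basic",
                        Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed tag
                        Ports: []corev1.ContainerPort{{ContainerPort: 9376}},           // assumed port
                    }},
                },
            },
        },
    }
    fmt.Printf("%+v\n", rs.Spec)
}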
Jul 7 11:07:27.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:07:27.653: INFO: namespace: e2e-tests-replicaset-kq65k, resource: bindings, ignored listing per whitelist Jul 7 11:07:27.755: INFO: namespace e2e-tests-replicaset-kq65k deletion completed in 6.181865211s • [SLOW TEST:17.268 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:07:27.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jul 7 11:07:27.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t9rbm' Jul 7 11:07:28.212: INFO: stderr: "" Jul 7 11:07:28.212: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jul 7 11:07:29.217: INFO: Selector matched 1 pods for map[app:redis] Jul 7 11:07:29.217: INFO: Found 0 / 1 Jul 7 11:07:30.217: INFO: Selector matched 1 pods for map[app:redis] Jul 7 11:07:30.217: INFO: Found 0 / 1 Jul 7 11:07:31.217: INFO: Selector matched 1 pods for map[app:redis] Jul 7 11:07:31.217: INFO: Found 0 / 1 Jul 7 11:07:32.217: INFO: Selector matched 1 pods for map[app:redis] Jul 7 11:07:32.217: INFO: Found 1 / 1 Jul 7 11:07:32.217: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jul 7 11:07:32.221: INFO: Selector matched 1 pods for map[app:redis] Jul 7 11:07:32.221: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jul 7 11:07:32.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-8vskc --namespace=e2e-tests-kubectl-t9rbm -p {"metadata":{"annotations":{"x":"y"}}}' Jul 7 11:07:32.343: INFO: stderr: "" Jul 7 11:07:32.343: INFO: stdout: "pod/redis-master-8vskc patched\n" STEP: checking annotations Jul 7 11:07:32.358: INFO: Selector matched 1 pods for map[app:redis] Jul 7 11:07:32.358: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:07:32.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-t9rbm" for this suite. 
Jul 7 11:07:48.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:07:48.397: INFO: namespace: e2e-tests-kubectl-t9rbm, resource: bindings, ignored listing per whitelist Jul 7 11:07:48.456: INFO: namespace e2e-tests-kubectl-t9rbm deletion completed in 16.09435719s • [SLOW TEST:20.700 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:07:48.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 7 11:07:48.565: INFO: Waiting up to 5m0s for pod "pod-17a96940-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-mnncv" to be "success or failure" Jul 7 11:07:48.587: INFO: Pod "pod-17a96940-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.084255ms Jul 7 11:07:50.591: INFO: Pod "pod-17a96940-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025872373s Jul 7 11:07:52.596: INFO: Pod "pod-17a96940-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030847527s STEP: Saw pod success Jul 7 11:07:52.596: INFO: Pod "pod-17a96940-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:07:52.600: INFO: Trying to get logs from node hunter-worker pod pod-17a96940-c042-11ea-9ad7-0242ac11001b container test-container: STEP: delete the pod Jul 7 11:07:52.618: INFO: Waiting for pod pod-17a96940-c042-11ea-9ad7-0242ac11001b to disappear Jul 7 11:07:52.623: INFO: Pod pod-17a96940-c042-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:07:52.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-mnncv" for this suite. 
Jul 7 11:07:58.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:07:58.709: INFO: namespace: e2e-tests-emptydir-mnncv, resource: bindings, ignored listing per whitelist Jul 7 11:07:58.804: INFO: namespace e2e-tests-emptydir-mnncv deletion completed in 6.178456141s • [SLOW TEST:10.348 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:07:58.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 7 11:07:58.933: INFO: Waiting up to 5m0s for pod "pod-1dd5b8a2-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-s2gs5" to be "success or failure" Jul 7 11:07:58.937: INFO: Pod "pod-1dd5b8a2-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.409078ms Jul 7 11:08:00.942: INFO: Pod "pod-1dd5b8a2-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008467853s Jul 7 11:08:02.946: INFO: Pod "pod-1dd5b8a2-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013185351s STEP: Saw pod success Jul 7 11:08:02.946: INFO: Pod "pod-1dd5b8a2-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:08:02.949: INFO: Trying to get logs from node hunter-worker2 pod pod-1dd5b8a2-c042-11ea-9ad7-0242ac11001b container test-container: STEP: delete the pod Jul 7 11:08:03.088: INFO: Waiting for pod pod-1dd5b8a2-c042-11ea-9ad7-0242ac11001b to disappear Jul 7 11:08:03.106: INFO: Pod pod-1dd5b8a2-c042-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:08:03.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-s2gs5" for this suite. 
Jul 7 11:08:09.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:08:09.170: INFO: namespace: e2e-tests-emptydir-s2gs5, resource: bindings, ignored listing per whitelist Jul 7 11:08:09.217: INFO: namespace e2e-tests-emptydir-s2gs5 deletion completed in 6.10698169s • [SLOW TEST:10.412 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:08:09.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0707 11:08:49.826155 6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 7 11:08:49.826: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:08:49.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-qdqcj" for this suite. 
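This garbage-collector case deletes the rc with orphan semantics and checks that its pods survive, while the earlier one (at 11:04) used foreground semantics and checked that the rc lingered until its pods were gone. Both behaviours hang off DeleteOptions.PropagationPolicy; a small sketch:

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Foreground: the owner keeps a deletion timestamp until the garbage
    // collector has removed all dependents. Orphan: dependents are left alone
    // when the owner is deleted.
    foreground := metav1.DeletePropagationForeground
    orphan := metav1.DeletePropagationOrphan

    keepUntilPodsGone := metav1.DeleteOptions{PropagationPolicy: &foreground}
    leavePodsBehind := metav1.DeleteOptions{PropagationPolicy: &orphan}

    // Either value would be supplied on the delete call for the replication
    // controller; the sketch only constructs the options.
    fmt.Println(*keepUntilPodsGone.PropagationPolicy, *leavePodsBehind.PropagationPolicy)
}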
Jul 7 11:09:01.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:09:01.864: INFO: namespace: e2e-tests-gc-qdqcj, resource: bindings, ignored listing per whitelist Jul 7 11:09:01.921: INFO: namespace e2e-tests-gc-qdqcj deletion completed in 12.091086404s • [SLOW TEST:52.704 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:09:01.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-4383a63a-c042-11ea-9ad7-0242ac11001b STEP: Creating a pod to test consume secrets Jul 7 11:09:02.159: INFO: Waiting up to 5m0s for pod "pod-secrets-43845b7c-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-secrets-rl7nj" to be "success or failure" Jul 7 11:09:02.176: INFO: Pod "pod-secrets-43845b7c-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.958283ms Jul 7 11:09:04.182: INFO: Pod "pod-secrets-43845b7c-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023445692s Jul 7 11:09:06.344: INFO: Pod "pod-secrets-43845b7c-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184496574s Jul 7 11:09:08.347: INFO: Pod "pod-secrets-43845b7c-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.188360749s STEP: Saw pod success Jul 7 11:09:08.347: INFO: Pod "pod-secrets-43845b7c-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:09:08.350: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-43845b7c-c042-11ea-9ad7-0242ac11001b container secret-volume-test: STEP: delete the pod Jul 7 11:09:08.452: INFO: Waiting for pod pod-secrets-43845b7c-c042-11ea-9ad7-0242ac11001b to disappear Jul 7 11:09:08.457: INFO: Pod pod-secrets-43845b7c-c042-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:09:08.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-rl7nj" for this suite. 
Jul 7 11:09:14.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:09:14.497: INFO: namespace: e2e-tests-secrets-rl7nj, resource: bindings, ignored listing per whitelist Jul 7 11:09:14.538: INFO: namespace e2e-tests-secrets-rl7nj deletion completed in 6.078065285s • [SLOW TEST:12.617 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:09:14.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:09:18.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-xw4bk" for this suite. 
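The kubelet case above runs a command that always fails and asserts the container reports a terminated reason. A hedged sketch of the status structure being inspected; exit code 1 and reason "Error" are typical values, not ones quoted from this run:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Roughly what the kubelet reports for a container whose command
    // (e.g. /bin/false) exits non-zero; the test reads this out of
    // pod.Status.ContainerStatuses[0].State.Terminated and asserts the
    // reason is populated.
    terminated := corev1.ContainerStateTerminated{
        ExitCode: 1,
        Reason:   "Error",
    }
    fmt.Printf("%+v\n", terminated)
}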
Jul 7 11:09:24.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:09:25.017: INFO: namespace: e2e-tests-kubelet-test-xw4bk, resource: bindings, ignored listing per whitelist Jul 7 11:09:25.039: INFO: namespace e2e-tests-kubelet-test-xw4bk deletion completed in 6.278127719s • [SLOW TEST:10.501 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:09:25.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:09:31.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-k64t7" for this suite. 
Jul 7 11:09:37.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:09:37.987: INFO: namespace: e2e-tests-emptydir-wrapper-k64t7, resource: bindings, ignored listing per whitelist Jul 7 11:09:38.011: INFO: namespace e2e-tests-emptydir-wrapper-k64t7 deletion completed in 6.105099505s • [SLOW TEST:12.971 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:09:38.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jul 7 11:09:38.173: INFO: Waiting up to 5m0s for pod "downward-api-58fe8db3-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-n4b4l" to be "success or failure" Jul 7 11:09:38.183: INFO: Pod "downward-api-58fe8db3-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.046763ms Jul 7 11:09:40.186: INFO: Pod "downward-api-58fe8db3-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012765689s Jul 7 11:09:42.271: INFO: Pod "downward-api-58fe8db3-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09791907s Jul 7 11:09:44.276: INFO: Pod "downward-api-58fe8db3-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.102992796s STEP: Saw pod success Jul 7 11:09:44.276: INFO: Pod "downward-api-58fe8db3-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:09:44.280: INFO: Trying to get logs from node hunter-worker2 pod downward-api-58fe8db3-c042-11ea-9ad7-0242ac11001b container dapi-container: STEP: delete the pod Jul 7 11:09:44.381: INFO: Waiting for pod downward-api-58fe8db3-c042-11ea-9ad7-0242ac11001b to disappear Jul 7 11:09:44.401: INFO: Pod downward-api-58fe8db3-c042-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:09:44.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-n4b4l" for this suite. 
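The Downward API case above injects the pod UID through an env var backed by fieldRef. A minimal sketch (the variable name is invented); the same mechanism serves metadata.name, status.hostIP and similar fields used by the neighbouring tests:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    env := corev1.EnvVar{
        Name: "POD_UID",
        ValueFrom: &corev1.EnvVarSource{
            FieldRef: &corev1.ObjectFieldSelector{
                APIVersion: "v1",
                FieldPath:  "metadata.uid", // also works for metadata.name, status.hostIP, ...
            },
        },
    }
    fmt.Printf("%+v\n", env)
}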
Jul 7 11:09:50.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:09:50.498: INFO: namespace: e2e-tests-downward-api-n4b4l, resource: bindings, ignored listing per whitelist Jul 7 11:09:50.514: INFO: namespace e2e-tests-downward-api-n4b4l deletion completed in 6.10719107s • [SLOW TEST:12.502 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:09:50.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-606ab2f1-c042-11ea-9ad7-0242ac11001b STEP: Creating a pod to test consume secrets Jul 7 11:09:50.667: INFO: Waiting up to 5m0s for pod "pod-secrets-6070f1d1-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-secrets-t4gj9" to be "success or failure" Jul 7 11:09:50.676: INFO: Pod "pod-secrets-6070f1d1-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.285924ms Jul 7 11:09:52.703: INFO: Pod "pod-secrets-6070f1d1-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036073096s Jul 7 11:09:54.707: INFO: Pod "pod-secrets-6070f1d1-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040235148s STEP: Saw pod success Jul 7 11:09:54.707: INFO: Pod "pod-secrets-6070f1d1-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:09:54.710: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-6070f1d1-c042-11ea-9ad7-0242ac11001b container secret-volume-test: STEP: delete the pod Jul 7 11:09:54.730: INFO: Waiting for pod pod-secrets-6070f1d1-c042-11ea-9ad7-0242ac11001b to disappear Jul 7 11:09:54.734: INFO: Pod pod-secrets-6070f1d1-c042-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:09:54.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-t4gj9" for this suite. 
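The defaultMode secret case above mounts a secret volume with an explicit default file mode. A sketch of the relevant volume source; the secret name and the 0400 mode are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    defaultMode := int32(0400) // applied to every projected file unless an item overrides it

    vol := corev1.Volume{
        Name: "secret-volume",
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{
                SecretName:  "secret-test", // illustrative name
                DefaultMode: &defaultMode,
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}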
Jul 7 11:10:00.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:10:00.764: INFO: namespace: e2e-tests-secrets-t4gj9, resource: bindings, ignored listing per whitelist Jul 7 11:10:00.820: INFO: namespace e2e-tests-secrets-t4gj9 deletion completed in 6.082115822s • [SLOW TEST:10.305 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:10:00.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jul 7 11:10:00.936: INFO: Waiting up to 5m0s for pod "downward-api-668fe4b6-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-thfjf" to be "success or failure" Jul 7 11:10:00.952: INFO: Pod "downward-api-668fe4b6-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.032363ms Jul 7 11:10:02.956: INFO: Pod "downward-api-668fe4b6-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020218935s Jul 7 11:10:04.961: INFO: Pod "downward-api-668fe4b6-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024925571s STEP: Saw pod success Jul 7 11:10:04.961: INFO: Pod "downward-api-668fe4b6-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:10:04.964: INFO: Trying to get logs from node hunter-worker2 pod downward-api-668fe4b6-c042-11ea-9ad7-0242ac11001b container dapi-container: STEP: delete the pod Jul 7 11:10:04.983: INFO: Waiting for pod downward-api-668fe4b6-c042-11ea-9ad7-0242ac11001b to disappear Jul 7 11:10:04.988: INFO: Pod downward-api-668fe4b6-c042-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:10:04.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-thfjf" for this suite. 
Jul 7 11:10:11.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:10:11.086: INFO: namespace: e2e-tests-downward-api-thfjf, resource: bindings, ignored listing per whitelist Jul 7 11:10:11.096: INFO: namespace e2e-tests-downward-api-thfjf deletion completed in 6.105457648s • [SLOW TEST:10.276 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:10:11.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-6cb144f3-c042-11ea-9ad7-0242ac11001b STEP: Creating a pod to test consume configMaps Jul 7 11:10:11.241: INFO: Waiting up to 5m0s for pod "pod-configmaps-6cb24e5c-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-configmap-bq8n2" to be "success or failure" Jul 7 11:10:11.284: INFO: Pod "pod-configmaps-6cb24e5c-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 42.197825ms Jul 7 11:10:13.356: INFO: Pod "pod-configmaps-6cb24e5c-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114030566s Jul 7 11:10:15.422: INFO: Pod "pod-configmaps-6cb24e5c-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.180210437s STEP: Saw pod success Jul 7 11:10:15.422: INFO: Pod "pod-configmaps-6cb24e5c-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure" Jul 7 11:10:15.424: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-6cb24e5c-c042-11ea-9ad7-0242ac11001b container configmap-volume-test: STEP: delete the pod Jul 7 11:10:15.476: INFO: Waiting for pod pod-configmaps-6cb24e5c-c042-11ea-9ad7-0242ac11001b to disappear Jul 7 11:10:15.503: INFO: Pod pod-configmaps-6cb24e5c-c042-11ea-9ad7-0242ac11001b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:10:15.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-bq8n2" for this suite. 
Jul 7 11:10:23.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 7 11:10:23.598: INFO: namespace: e2e-tests-configmap-bq8n2, resource: bindings, ignored listing per whitelist Jul 7 11:10:23.603: INFO: namespace e2e-tests-configmap-bq8n2 deletion completed in 8.095441923s • [SLOW TEST:12.506 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jul 7 11:10:23.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jul 7 11:10:23.701: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jul 7 11:10:23.743: INFO: Pod name sample-pod: Found 0 pods out of 1 Jul 7 11:10:28.748: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 7 11:10:28.748: INFO: Creating deployment "test-rolling-update-deployment" Jul 7 11:10:28.752: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jul 7 11:10:28.768: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jul 7 11:10:30.774: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jul 7 11:10:30.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729717029, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729717029, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729717029, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729717028, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 11:10:32.781: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729717029, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729717029, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729717029, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729717028, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 7 11:10:34.781: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jul 7 11:10:34.791: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-gcjrl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gcjrl/deployments/test-rolling-update-deployment,UID:7724db9f-c042-11ea-a300-0242ac110004,ResourceVersion:593299,Generation:1,CreationTimestamp:2020-07-07 11:10:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-07 11:10:29 +0000 UTC 2020-07-07 11:10:29 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-07 11:10:33 +0000 UTC 2020-07-07 11:10:28 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jul 7 11:10:34.795: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-gcjrl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gcjrl/replicasets/test-rolling-update-deployment-75db98fb4c,UID:7728490f-c042-11ea-a300-0242ac110004,ResourceVersion:593290,Generation:1,CreationTimestamp:2020-07-07 11:10:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7724db9f-c042-11ea-a300-0242ac110004 0xc001f13617 0xc001f13618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 7 11:10:34.795: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jul 7 11:10:34.795: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-gcjrl,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-gcjrl/replicasets/test-rolling-update-controller,UID:7422b6ed-c042-11ea-a300-0242ac110004,ResourceVersion:593298,Generation:2,CreationTimestamp:2020-07-07 11:10:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 7724db9f-c042-11ea-a300-0242ac110004 0xc001f13557 0xc001f13558}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 7 11:10:34.799: INFO: Pod "test-rolling-update-deployment-75db98fb4c-mc9fn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-mc9fn,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-gcjrl,SelfLink:/api/v1/namespaces/e2e-tests-deployment-gcjrl/pods/test-rolling-update-deployment-75db98fb4c-mc9fn,UID:772b0883-c042-11ea-a300-0242ac110004,ResourceVersion:593289,Generation:0,CreationTimestamp:2020-07-07 11:10:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 7728490f-c042-11ea-a300-0242ac110004 0xc001ce4c77 0xc001ce4c78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-l6ll2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l6ll2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-l6ll2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ce4d10} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ce4d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:10:29 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:10:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:10:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:10:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.1.5,StartTime:2020-07-07 11:10:29 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-07 11:10:32 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://c350c164e794b758d201776111acdaddd392247921f0273d5f712e81fdc2e4c4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 7 11:10:34.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-gcjrl" for this suite. 
Jul 7 11:10:42.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 7 11:10:42.885: INFO: namespace: e2e-tests-deployment-gcjrl, resource: bindings, ignored listing per whitelist
Jul 7 11:10:42.908: INFO: namespace e2e-tests-deployment-gcjrl deletion completed in 8.10472459s

• [SLOW TEST:19.305 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 7 11:10:42.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-7fa5b797-c042-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume configMaps
Jul 7 11:10:43.031: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7fa6d796-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-whc82" to be "success or failure"
Jul 7 11:10:43.050: INFO: Pod "pod-projected-configmaps-7fa6d796-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.483234ms
Jul 7 11:10:45.055: INFO: Pod "pod-projected-configmaps-7fa6d796-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023706376s
Jul 7 11:10:47.075: INFO: Pod "pod-projected-configmaps-7fa6d796-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044113307s
STEP: Saw pod success
Jul 7 11:10:47.075: INFO: Pod "pod-projected-configmaps-7fa6d796-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul 7 11:10:47.078: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-7fa6d796-c042-11ea-9ad7-0242ac11001b container projected-configmap-volume-test: 
STEP: delete the pod
Jul 7 11:10:47.119: INFO: Waiting for pod pod-projected-configmaps-7fa6d796-c042-11ea-9ad7-0242ac11001b to disappear
Jul 7 11:10:47.155: INFO: Pod pod-projected-configmaps-7fa6d796-c042-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 7 11:10:47.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-whc82" for this suite.
Jul 7 11:10:53.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 7 11:10:53.192: INFO: namespace: e2e-tests-projected-whc82, resource: bindings, ignored listing per whitelist
Jul 7 11:10:53.254: INFO: namespace e2e-tests-projected-whc82 deletion completed in 6.095267666s

• [SLOW TEST:10.346 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 7 11:10:53.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 7 11:10:53.400: INFO: Waiting up to 5m0s for pod "pod-85d54afc-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-9m7vw" to be "success or failure"
Jul 7 11:10:53.407: INFO: Pod "pod-85d54afc-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.157912ms
Jul 7 11:10:55.420: INFO: Pod "pod-85d54afc-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020379585s
Jul 7 11:10:57.643: INFO: Pod "pod-85d54afc-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.243628965s
STEP: Saw pod success
Jul 7 11:10:57.644: INFO: Pod "pod-85d54afc-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul 7 11:10:57.646: INFO: Trying to get logs from node hunter-worker2 pod pod-85d54afc-c042-11ea-9ad7-0242ac11001b container test-container: 
STEP: delete the pod
Jul 7 11:10:57.693: INFO: Waiting for pod pod-85d54afc-c042-11ea-9ad7-0242ac11001b to disappear
Jul 7 11:10:57.695: INFO: Pod pod-85d54afc-c042-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 7 11:10:57.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9m7vw" for this suite.
Jul 7 11:11:03.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 7 11:11:03.883: INFO: namespace: e2e-tests-emptydir-9m7vw, resource: bindings, ignored listing per whitelist
Jul 7 11:11:03.883: INFO: namespace e2e-tests-emptydir-9m7vw deletion completed in 6.183461248s

• [SLOW TEST:10.628 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 7 11:11:03.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jul 7 11:11:03.994: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix795906019/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 7 11:11:04.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ckzph" for this suite.
Jul 7 11:11:10.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 7 11:11:10.100: INFO: namespace: e2e-tests-kubectl-ckzph, resource: bindings, ignored listing per whitelist
Jul 7 11:11:10.152: INFO: namespace e2e-tests-kubectl-ckzph deletion completed in 6.088818342s

• [SLOW TEST:6.269 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 7 11:11:10.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-kwwqh
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-kwwqh to expose endpoints map[]
Jul 7 11:11:10.363: INFO: Get endpoints failed (36.16049ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul 7 11:11:11.367: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-kwwqh exposes endpoints map[] (1.040299589s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-kwwqh
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-kwwqh to expose endpoints map[pod1:[100]]
Jul 7 11:11:14.404: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-kwwqh exposes endpoints map[pod1:[100]] (3.029334868s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-kwwqh
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-kwwqh to expose endpoints map[pod1:[100] pod2:[101]]
Jul 7 11:11:18.817: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-kwwqh exposes endpoints map[pod1:[100] pod2:[101]] (4.409109429s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-kwwqh
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-kwwqh to expose endpoints map[pod2:[101]]
Jul 7 11:11:20.844: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-kwwqh exposes endpoints map[pod2:[101]] (2.023120042s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-kwwqh
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-kwwqh to expose endpoints map[]
Jul 7 11:11:21.892: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-kwwqh exposes endpoints map[] (1.043479909s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 7 11:11:21.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-kwwqh" for this suite.
Jul 7 11:11:44.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 7 11:11:44.077: INFO: namespace: e2e-tests-services-kwwqh, resource: bindings, ignored listing per whitelist
Jul 7 11:11:44.142: INFO: namespace e2e-tests-services-kwwqh deletion completed in 22.10256149s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:33.990 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 7 11:11:44.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 7 11:11:48.822: INFO: Successfully updated pod "labelsupdatea428fbf3-c042-11ea-9ad7-0242ac11001b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 7 11:11:50.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pc6n5" for this suite.
Jul 7 11:12:12.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 7 11:12:12.905: INFO: namespace: e2e-tests-projected-pc6n5, resource: bindings, ignored listing per whitelist
Jul 7 11:12:12.937: INFO: namespace e2e-tests-projected-pc6n5 deletion completed in 22.094306064s

• [SLOW TEST:28.794 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 7 11:12:12.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 7 11:12:13.071: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5522a2e-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-tpkcq" to be "success or failure"
Jul 7 11:12:13.104: INFO: Pod "downwardapi-volume-b5522a2e-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.667537ms
Jul 7 11:12:15.108: INFO: Pod "downwardapi-volume-b5522a2e-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036575454s
Jul 7 11:12:17.112: INFO: Pod "downwardapi-volume-b5522a2e-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041005713s
STEP: Saw pod success
Jul 7 11:12:17.112: INFO: Pod "downwardapi-volume-b5522a2e-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul 7 11:12:17.115: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-b5522a2e-c042-11ea-9ad7-0242ac11001b container client-container: 
STEP: delete the pod
Jul 7 11:12:17.184: INFO: Waiting for pod downwardapi-volume-b5522a2e-c042-11ea-9ad7-0242ac11001b to disappear
Jul 7 11:12:17.199: INFO: Pod downwardapi-volume-b5522a2e-c042-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 7 11:12:17.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tpkcq" for this suite.
Jul 7 11:12:23.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 7 11:12:23.249: INFO: namespace: e2e-tests-downward-api-tpkcq, resource: bindings, ignored listing per whitelist
Jul 7 11:12:23.298: INFO: namespace e2e-tests-downward-api-tpkcq deletion completed in 6.094319342s

• [SLOW TEST:10.361 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 7 11:12:23.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 7 11:12:28.003: INFO: Successfully updated pod "labelsupdatebb7a0ddb-c042-11ea-9ad7-0242ac11001b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 7 11:12:30.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-n2tm6" for this suite.
Jul 7 11:12:48.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 7 11:12:48.112: INFO: namespace: e2e-tests-downward-api-n2tm6, resource: bindings, ignored listing per whitelist
Jul 7 11:12:48.186: INFO: namespace e2e-tests-downward-api-n2tm6 deletion completed in 18.132767088s

• [SLOW TEST:24.888 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 7 11:12:48.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 7 11:12:48.353: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
alternatives.log
containers/

(the alternatives.log / containers/ directory listing above is repeated for each subsequent node-proxy log request; the remainder of the proxy-logs test output and the header of the following spec were truncated in this capture)
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-ce275742-c042-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume secrets
Jul  7 11:12:54.784: INFO: Waiting up to 5m0s for pod "pod-secrets-ce2eb9f6-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-secrets-8hj7l" to be "success or failure"
Jul  7 11:12:54.788: INFO: Pod "pod-secrets-ce2eb9f6-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.461983ms
Jul  7 11:12:56.791: INFO: Pod "pod-secrets-ce2eb9f6-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00675238s
Jul  7 11:12:58.796: INFO: Pod "pod-secrets-ce2eb9f6-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01127044s
STEP: Saw pod success
Jul  7 11:12:58.796: INFO: Pod "pod-secrets-ce2eb9f6-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:12:58.799: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-ce2eb9f6-c042-11ea-9ad7-0242ac11001b container secret-volume-test: 
STEP: delete the pod
Jul  7 11:12:59.100: INFO: Waiting for pod pod-secrets-ce2eb9f6-c042-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:12:59.153: INFO: Pod pod-secrets-ce2eb9f6-c042-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:12:59.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8hj7l" for this suite.
Jul  7 11:13:05.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:13:05.248: INFO: namespace: e2e-tests-secrets-8hj7l, resource: bindings, ignored listing per whitelist
Jul  7 11:13:05.281: INFO: namespace e2e-tests-secrets-8hj7l deletion completed in 6.123851926s

• [SLOW TEST:10.751 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
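Editor's note: the multi-volume secret consumption exercised above boils down to a pod that mounts the same Secret at two paths. The following is only a minimal sketch built with the k8s.io/api Go types; the secret name, pod name, busybox image, command, and mount paths are illustrative assumptions, not the UUID-suffixed objects the e2e framework actually generated.

// Sketch: a pod mounting one Secret through two volumes (assumed names/image).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	const secretName = "secret-test-demo" // hypothetical; the framework uses a generated name
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
				{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.29", // assumption; the suite uses its own mount-test image
				Command: []string{"sh", "-c", "ls -l /etc/secret-volume-1 /etc/secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // prints the manifest the spec above is conceptually creating
}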
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:13:05.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-fmb9h/secret-test-d482df43-c042-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume secrets
Jul  7 11:13:05.406: INFO: Waiting up to 5m0s for pod "pod-configmaps-d483a863-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-secrets-fmb9h" to be "success or failure"
Jul  7 11:13:05.411: INFO: Pod "pod-configmaps-d483a863-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.371006ms
Jul  7 11:13:07.555: INFO: Pod "pod-configmaps-d483a863-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148267428s
Jul  7 11:13:09.597: INFO: Pod "pod-configmaps-d483a863-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.19056641s
STEP: Saw pod success
Jul  7 11:13:09.597: INFO: Pod "pod-configmaps-d483a863-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:13:09.600: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-d483a863-c042-11ea-9ad7-0242ac11001b container env-test: 
STEP: delete the pod
Jul  7 11:13:09.644: INFO: Waiting for pod pod-configmaps-d483a863-c042-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:13:09.908: INFO: Pod pod-configmaps-d483a863-c042-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:13:09.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fmb9h" for this suite.
Jul  7 11:13:16.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:13:16.193: INFO: namespace: e2e-tests-secrets-fmb9h, resource: bindings, ignored listing per whitelist
Jul  7 11:13:16.224: INFO: namespace e2e-tests-secrets-fmb9h deletion completed in 6.102345936s

• [SLOW TEST:10.942 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
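Editor's note: the "consumable via the environment" spec above injects a Secret key into a container through an env var rather than a volume. A hedged sketch of that shape follows; the secret name, key, value, pod name, and image are placeholders, not the framework's generated fixtures.

// Sketch: Secret data surfaced as an environment variable via secretKeyRef.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := corev1.Secret{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Secret"},
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-demo"}, // hypothetical name
		StringData: map[string]string{"data-1": "value-1"},      // hypothetical key/value
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox:1.29", // assumption
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	for _, obj := range []interface{}{secret, pod} {
		out, err := json.MarshalIndent(obj, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
}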
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:13:16.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  7 11:13:16.352: INFO: Waiting up to 5m0s for pod "pod-db08ab75-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-w7c5t" to be "success or failure"
Jul  7 11:13:16.356: INFO: Pod "pod-db08ab75-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.207303ms
Jul  7 11:13:18.360: INFO: Pod "pod-db08ab75-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007380339s
Jul  7 11:13:20.364: INFO: Pod "pod-db08ab75-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011632349s
Jul  7 11:13:22.375: INFO: Pod "pod-db08ab75-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022514393s
STEP: Saw pod success
Jul  7 11:13:22.375: INFO: Pod "pod-db08ab75-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:13:22.377: INFO: Trying to get logs from node hunter-worker2 pod pod-db08ab75-c042-11ea-9ad7-0242ac11001b container test-container: 
STEP: delete the pod
Jul  7 11:13:22.416: INFO: Waiting for pod pod-db08ab75-c042-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:13:22.435: INFO: Pod pod-db08ab75-c042-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:13:22.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-w7c5t" for this suite.
Jul  7 11:13:28.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:13:28.681: INFO: namespace: e2e-tests-emptydir-w7c5t, resource: bindings, ignored listing per whitelist
Jul  7 11:13:28.732: INFO: namespace e2e-tests-emptydir-w7c5t deletion completed in 6.294458155s

• [SLOW TEST:12.508 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
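Editor's note: "(root,0777,tmpfs)" above means a memory-backed emptyDir and a file created with mode 0777 inside it. A minimal sketch of that pod shape is below; names, the busybox image, and the shell command are assumptions rather than the framework's exact test pod.

// Sketch: emptyDir on tmpfs (medium "Memory") with a 0777 file written into it.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs instead of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29", // assumption
				Command: []string{"sh", "-c", "touch /test-volume/file && chmod 0777 /test-volume/file && ls -l /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}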
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:13:28.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  7 11:13:29.198: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e2ae9baf-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-phr94" to be "success or failure"
Jul  7 11:13:29.429: INFO: Pod "downwardapi-volume-e2ae9baf-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 231.728935ms
Jul  7 11:13:31.433: INFO: Pod "downwardapi-volume-e2ae9baf-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235316834s
Jul  7 11:13:33.513: INFO: Pod "downwardapi-volume-e2ae9baf-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315429167s
Jul  7 11:13:35.518: INFO: Pod "downwardapi-volume-e2ae9baf-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.320068509s
Jul  7 11:13:37.531: INFO: Pod "downwardapi-volume-e2ae9baf-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.333237862s
Jul  7 11:13:40.071: INFO: Pod "downwardapi-volume-e2ae9baf-c042-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 10.873031104s
Jul  7 11:13:42.075: INFO: Pod "downwardapi-volume-e2ae9baf-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.877114089s
STEP: Saw pod success
Jul  7 11:13:42.075: INFO: Pod "downwardapi-volume-e2ae9baf-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:13:42.077: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-e2ae9baf-c042-11ea-9ad7-0242ac11001b container client-container: 
STEP: delete the pod
Jul  7 11:13:42.125: INFO: Waiting for pod downwardapi-volume-e2ae9baf-c042-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:13:42.280: INFO: Pod downwardapi-volume-e2ae9baf-c042-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:13:42.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-phr94" for this suite.
Jul  7 11:13:48.508: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:13:48.531: INFO: namespace: e2e-tests-projected-phr94, resource: bindings, ignored listing per whitelist
Jul  7 11:13:48.587: INFO: namespace e2e-tests-projected-phr94 deletion completed in 6.302546869s

• [SLOW TEST:19.855 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
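Editor's note: the projected downwardAPI DefaultMode spec above checks that files projected into the volume get an explicit default file mode. The sketch below shows the shape of such a volume using the k8s.io/api types; the 0400 mode, pod/volume names, mount path, and image are illustrative assumptions.

// Sketch: projected downward API volume with defaultMode set on its files.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	defaultMode := int32(0400) // assumed mode; the point is that it applies to every projected file
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode,
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox:1.29", // assumption
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}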
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:13:48.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-ee52dcb1-c042-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume configMaps
Jul  7 11:13:48.761: INFO: Waiting up to 5m0s for pod "pod-configmaps-ee539120-c042-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-configmap-x68b5" to be "success or failure"
Jul  7 11:13:48.764: INFO: Pod "pod-configmaps-ee539120-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.783495ms
Jul  7 11:13:50.769: INFO: Pod "pod-configmaps-ee539120-c042-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007760336s
Jul  7 11:13:52.774: INFO: Pod "pod-configmaps-ee539120-c042-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012033402s
STEP: Saw pod success
Jul  7 11:13:52.774: INFO: Pod "pod-configmaps-ee539120-c042-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:13:52.776: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-ee539120-c042-11ea-9ad7-0242ac11001b container configmap-volume-test: 
STEP: delete the pod
Jul  7 11:13:53.036: INFO: Waiting for pod pod-configmaps-ee539120-c042-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:13:53.053: INFO: Pod pod-configmaps-ee539120-c042-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:13:53.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-x68b5" for this suite.
Jul  7 11:13:59.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:13:59.077: INFO: namespace: e2e-tests-configmap-x68b5, resource: bindings, ignored listing per whitelist
Jul  7 11:13:59.139: INFO: namespace e2e-tests-configmap-x68b5 deletion completed in 6.082431045s

• [SLOW TEST:10.552 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
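Editor's note: "mappings and Item mode set" above combines an items mapping (a key projected to a custom path) with a per-item file mode on a ConfigMap volume. A hedged sketch of that combination follows; the key, target path, 0400 mode, object names, and image are assumptions, not the generated fixtures.

// Sketch: ConfigMap volume with an items mapping and a per-item mode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	itemMode := int32(0400) // assumed; overrides the volume's default mode for this one file
	cm := corev1.ConfigMap{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ConfigMap"},
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map-demo"}, // hypothetical name
		Data:       map[string]string{"data-1": "value-1"},
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
						// Map key data-1 onto a custom path and give just that file mode 0400.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &itemMode}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox:1.29", // assumption
				Command:      []string{"sh", "-c", "ls -l /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	for _, obj := range []interface{}{cm, pod} {
		out, err := json.MarshalIndent(obj, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
}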
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:13:59.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul  7 11:13:59.336: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:13:59.338: INFO: Number of nodes with available pods: 0
Jul  7 11:13:59.338: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:14:00.343: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:14:00.346: INFO: Number of nodes with available pods: 0
Jul  7 11:14:00.346: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:14:01.344: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:14:01.347: INFO: Number of nodes with available pods: 0
Jul  7 11:14:01.347: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:14:02.344: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:14:02.347: INFO: Number of nodes with available pods: 0
Jul  7 11:14:02.347: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:14:03.344: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:14:03.348: INFO: Number of nodes with available pods: 0
Jul  7 11:14:03.348: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:14:04.343: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:14:04.347: INFO: Number of nodes with available pods: 2
Jul  7 11:14:04.348: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jul  7 11:14:04.379: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:14:04.394: INFO: Number of nodes with available pods: 2
Jul  7 11:14:04.394: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-j65hn, will wait for the garbage collector to delete the pods
Jul  7 11:14:05.467: INFO: Deleting DaemonSet.extensions daemon-set took: 6.136813ms
Jul  7 11:14:05.667: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.233167ms
Jul  7 11:14:13.777: INFO: Number of nodes with available pods: 0
Jul  7 11:14:13.777: INFO: Number of running nodes: 0, number of available pods: 0
Jul  7 11:14:13.781: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-j65hn/daemonsets","resourceVersion":"594126"},"items":null}

Jul  7 11:14:13.784: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-j65hn/pods","resourceVersion":"594126"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:14:13.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-j65hn" for this suite.
Jul  7 11:14:19.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:14:19.891: INFO: namespace: e2e-tests-daemonsets-j65hn, resource: bindings, ignored listing per whitelist
Jul  7 11:14:19.924: INFO: namespace e2e-tests-daemonsets-j65hn deletion completed in 6.088845772s

• [SLOW TEST:20.785 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
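Editor's note: the DaemonSet spec above is a "simple DaemonSet" whose pods land on every schedulable node (the tainted control-plane node is skipped), and the test then forces one daemon pod's phase to Failed and waits for the controller to recreate it. The sketch below shows roughly what such a DaemonSet looks like with the k8s.io/api types; the label, names, and the nginx image (borrowed from the images used elsewhere in this run) are assumptions.

// Sketch: a minimal DaemonSet of the kind the failed-pod-retry spec drives.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label key
	ds := appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "DaemonSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine", // assumption
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}
	out, err := json.MarshalIndent(ds, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}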
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:14:19.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:14:27.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-6n5p9" for this suite.
Jul  7 11:14:49.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:14:49.159: INFO: namespace: e2e-tests-replication-controller-6n5p9, resource: bindings, ignored listing per whitelist
Jul  7 11:14:49.250: INFO: namespace e2e-tests-replication-controller-6n5p9 deletion completed in 22.116565413s

• [SLOW TEST:29.326 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
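Editor's note: adoption in the ReplicationController spec above works because the bare pod created first carries a label that the controller's selector matches; instead of creating a replacement, the controller takes ownership of the existing pod by adding an ownerReference. The sketch below lays out that pair of objects; names, labels, and image are assumptions.

// Sketch: a bare pod plus a ReplicationController whose selector matches it,
// the setup under which the controller adopts the pre-existing pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption"} // matches the 'name' label used by the test
	replicas := int32(1)
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pod-adoption", Image: "docker.io/library/nginx:1.14-alpine"}}, // image assumed
		},
	}
	rc := corev1.ReplicationController{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "ReplicationController"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-controller"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels, // matches the pre-existing pod, so it is adopted rather than duplicated
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       pod.Spec,
			},
		},
	}
	for _, obj := range []interface{}{pod, rc} {
		out, err := json.MarshalIndent(obj, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
}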
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:14:49.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-127f66ba-c043-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume secrets
Jul  7 11:14:49.403: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-127fe6a8-c043-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-ptfhp" to be "success or failure"
Jul  7 11:14:49.422: INFO: Pod "pod-projected-secrets-127fe6a8-c043-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.851979ms
Jul  7 11:14:51.472: INFO: Pod "pod-projected-secrets-127fe6a8-c043-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069062415s
Jul  7 11:14:53.479: INFO: Pod "pod-projected-secrets-127fe6a8-c043-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076287358s
STEP: Saw pod success
Jul  7 11:14:53.479: INFO: Pod "pod-projected-secrets-127fe6a8-c043-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:14:53.487: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-127fe6a8-c043-11ea-9ad7-0242ac11001b container secret-volume-test: 
STEP: delete the pod
Jul  7 11:14:53.622: INFO: Waiting for pod pod-projected-secrets-127fe6a8-c043-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:14:53.628: INFO: Pod pod-projected-secrets-127fe6a8-c043-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:14:53.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ptfhp" for this suite.
Jul  7 11:15:01.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:15:01.729: INFO: namespace: e2e-tests-projected-ptfhp, resource: bindings, ignored listing per whitelist
Jul  7 11:15:01.772: INFO: namespace e2e-tests-projected-ptfhp deletion completed in 8.140137798s

• [SLOW TEST:12.522 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
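
The spec above mounts a single secret through two separate projected volumes in the same pod and checks that both mounts serve the data. A minimal sketch of such a pod using the k8s.io/api Go types follows; it is not the test's own code, and the image, mount paths and data key are illustrative assumptions.

// Minimal sketch (assumed manifest): one secret projected into two
// volumes of the same pod.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretVolume builds a projected volume sourcing a single secret.
func projectedSecretVolume(name, secretName string) corev1.Volume {
    return corev1.Volume{
        Name: name,
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                    },
                }},
            },
        },
    }
}

func main() {
    secret := "projected-secret-test" // the run above used a generated suffix
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "secret-volume-test",
                Image: "docker.io/library/busybox:1.29", // assumed; the log only names the container
                // "data" is an assumed key in the secret.
                Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data /etc/secret-volume-2/data"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
                    {Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
                },
            }},
            Volumes: []corev1.Volume{
                projectedSecretVolume("secret-volume-1", secret),
                projectedSecretVolume("secret-volume-2", secret),
            },
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
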
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:15:01.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jul  7 11:15:01.871: INFO: Waiting up to 5m0s for pod "client-containers-19ecc76e-c043-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-containers-wzqk4" to be "success or failure"
Jul  7 11:15:01.874: INFO: Pod "client-containers-19ecc76e-c043-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.392051ms
Jul  7 11:15:03.878: INFO: Pod "client-containers-19ecc76e-c043-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007576745s
Jul  7 11:15:05.882: INFO: Pod "client-containers-19ecc76e-c043-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011395461s
STEP: Saw pod success
Jul  7 11:15:05.882: INFO: Pod "client-containers-19ecc76e-c043-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:15:05.885: INFO: Trying to get logs from node hunter-worker pod client-containers-19ecc76e-c043-11ea-9ad7-0242ac11001b container test-container: 
STEP: delete the pod
Jul  7 11:15:06.074: INFO: Waiting for pod client-containers-19ecc76e-c043-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:15:06.120: INFO: Pod client-containers-19ecc76e-c043-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:15:06.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-wzqk4" for this suite.
Jul  7 11:15:12.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:15:12.189: INFO: namespace: e2e-tests-containers-wzqk4, resource: bindings, ignored listing per whitelist
Jul  7 11:15:12.255: INFO: namespace e2e-tests-containers-wzqk4 deletion completed in 6.130686707s

• [SLOW TEST:10.483 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
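
The "override all" pod above relies on standard Kubernetes semantics: a container's command replaces the image's ENTRYPOINT and its args replace the image's CMD. A minimal sketch follows; it is not the test's own code, and the image and command values are assumptions.

// Minimal sketch (assumed manifest): Command overrides ENTRYPOINT,
// Args overrides CMD.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "client-containers-override-all"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "docker.io/library/busybox:1.29",      // assumed image
                Command: []string{"/bin/sh", "-c"},              // replaces the image ENTRYPOINT
                Args:    []string{"echo overridden command and args"}, // replaces the image CMD
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
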
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:15:12.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  7 11:15:12.385: INFO: Waiting up to 5m0s for pod "pod-202e6717-c043-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-hrtb4" to be "success or failure"
Jul  7 11:15:12.395: INFO: Pod "pod-202e6717-c043-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.032316ms
Jul  7 11:15:14.399: INFO: Pod "pod-202e6717-c043-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014360541s
Jul  7 11:15:16.404: INFO: Pod "pod-202e6717-c043-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018481937s
STEP: Saw pod success
Jul  7 11:15:16.404: INFO: Pod "pod-202e6717-c043-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:15:16.406: INFO: Trying to get logs from node hunter-worker2 pod pod-202e6717-c043-11ea-9ad7-0242ac11001b container test-container: 
STEP: delete the pod
Jul  7 11:15:16.468: INFO: Waiting for pod pod-202e6717-c043-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:15:16.485: INFO: Pod pod-202e6717-c043-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:15:16.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-hrtb4" for this suite.
Jul  7 11:15:22.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:15:22.528: INFO: namespace: e2e-tests-emptydir-hrtb4, resource: bindings, ignored listing per whitelist
Jul  7 11:15:22.637: INFO: namespace e2e-tests-emptydir-hrtb4 deletion completed in 6.147619984s

• [SLOW TEST:10.382 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
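
"(root,0644,tmpfs)" refers to an emptyDir volume backed by memory (medium Memory, i.e. tmpfs) in which the container, running as root, creates a file with mode 0644 and checks the result. A minimal sketch of such a pod follows; it is not the test's own code, and the image and shell command are assumptions.

// Minimal sketch (assumed manifest): memory-backed emptyDir, file
// written as root with mode 0644.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644-tmpfs"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "docker.io/library/busybox:1.29", // assumed image
                Command: []string{"sh", "-c",
                    "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" makes the emptyDir a tmpfs mount.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
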
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:15:22.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  7 11:15:22.769: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2663aa4d-c043-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-vfczx" to be "success or failure"
Jul  7 11:15:22.787: INFO: Pod "downwardapi-volume-2663aa4d-c043-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.361854ms
Jul  7 11:15:24.791: INFO: Pod "downwardapi-volume-2663aa4d-c043-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022006394s
Jul  7 11:15:26.796: INFO: Pod "downwardapi-volume-2663aa4d-c043-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027067758s
STEP: Saw pod success
Jul  7 11:15:26.796: INFO: Pod "downwardapi-volume-2663aa4d-c043-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:15:26.800: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-2663aa4d-c043-11ea-9ad7-0242ac11001b container client-container: 
STEP: delete the pod
Jul  7 11:15:26.984: INFO: Waiting for pod downwardapi-volume-2663aa4d-c043-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:15:27.119: INFO: Pod downwardapi-volume-2663aa4d-c043-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:15:27.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vfczx" for this suite.
Jul  7 11:15:33.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:15:33.217: INFO: namespace: e2e-tests-downward-api-vfczx, resource: bindings, ignored listing per whitelist
Jul  7 11:15:33.238: INFO: namespace e2e-tests-downward-api-vfczx deletion completed in 6.114689699s

• [SLOW TEST:10.601 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
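
The downward API volume above exposes the container's own CPU request as a file that the container then reads back. A minimal sketch of the mechanism follows; it is not the test's own code, and the image, paths and the 250m request are assumptions. Note that with the default divisor of 1 the exposed CPU value is rounded up to whole cores, so a 250m request reads back as 1.

// Minimal sketch (assumed manifest): downward API volume exposing
// requests.cpu of the same container as a file.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-cpu-request"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "docker.io/library/busybox:1.29", // assumed image
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "cpu_request",
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "requests.cpu",
                            },
                        }},
                    },
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
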
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:15:33.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul  7 11:15:47.762: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cxcg7 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 11:15:47.762: INFO: >>> kubeConfig: /root/.kube/config
I0707 11:15:47.798740       6 log.go:172] (0xc000a27a20) (0xc00041c500) Create stream
I0707 11:15:47.798771       6 log.go:172] (0xc000a27a20) (0xc00041c500) Stream added, broadcasting: 1
I0707 11:15:47.802298       6 log.go:172] (0xc000a27a20) Reply frame received for 1
I0707 11:15:47.802330       6 log.go:172] (0xc000a27a20) (0xc001813cc0) Create stream
I0707 11:15:47.802344       6 log.go:172] (0xc000a27a20) (0xc001813cc0) Stream added, broadcasting: 3
I0707 11:15:47.803140       6 log.go:172] (0xc000a27a20) Reply frame received for 3
I0707 11:15:47.803166       6 log.go:172] (0xc000a27a20) (0xc00041c640) Create stream
I0707 11:15:47.803178       6 log.go:172] (0xc000a27a20) (0xc00041c640) Stream added, broadcasting: 5
I0707 11:15:47.804002       6 log.go:172] (0xc000a27a20) Reply frame received for 5
I0707 11:15:47.895120       6 log.go:172] (0xc000a27a20) Data frame received for 5
I0707 11:15:47.895158       6 log.go:172] (0xc000a27a20) Data frame received for 3
I0707 11:15:47.895206       6 log.go:172] (0xc001813cc0) (3) Data frame handling
I0707 11:15:47.895239       6 log.go:172] (0xc001813cc0) (3) Data frame sent
I0707 11:15:47.895259       6 log.go:172] (0xc000a27a20) Data frame received for 3
I0707 11:15:47.895271       6 log.go:172] (0xc001813cc0) (3) Data frame handling
I0707 11:15:47.895289       6 log.go:172] (0xc00041c640) (5) Data frame handling
I0707 11:15:47.896517       6 log.go:172] (0xc000a27a20) Data frame received for 1
I0707 11:15:47.896558       6 log.go:172] (0xc00041c500) (1) Data frame handling
I0707 11:15:47.896590       6 log.go:172] (0xc00041c500) (1) Data frame sent
I0707 11:15:47.896616       6 log.go:172] (0xc000a27a20) (0xc00041c500) Stream removed, broadcasting: 1
I0707 11:15:47.896705       6 log.go:172] (0xc000a27a20) Go away received
I0707 11:15:47.896825       6 log.go:172] (0xc000a27a20) (0xc00041c500) Stream removed, broadcasting: 1
I0707 11:15:47.896859       6 log.go:172] (0xc000a27a20) (0xc001813cc0) Stream removed, broadcasting: 3
I0707 11:15:47.896880       6 log.go:172] (0xc000a27a20) (0xc00041c640) Stream removed, broadcasting: 5
Jul  7 11:15:47.896: INFO: Exec stderr: ""
Jul  7 11:15:47.896: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cxcg7 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 11:15:47.896: INFO: >>> kubeConfig: /root/.kube/config
I0707 11:15:47.931473       6 log.go:172] (0xc000a27ef0) (0xc00041cbe0) Create stream
I0707 11:15:47.931507       6 log.go:172] (0xc000a27ef0) (0xc00041cbe0) Stream added, broadcasting: 1
I0707 11:15:47.934168       6 log.go:172] (0xc000a27ef0) Reply frame received for 1
I0707 11:15:47.934226       6 log.go:172] (0xc000a27ef0) (0xc0021a81e0) Create stream
I0707 11:15:47.934244       6 log.go:172] (0xc000a27ef0) (0xc0021a81e0) Stream added, broadcasting: 3
I0707 11:15:47.935145       6 log.go:172] (0xc000a27ef0) Reply frame received for 3
I0707 11:15:47.935176       6 log.go:172] (0xc000a27ef0) (0xc001813d60) Create stream
I0707 11:15:47.935187       6 log.go:172] (0xc000a27ef0) (0xc001813d60) Stream added, broadcasting: 5
I0707 11:15:47.936180       6 log.go:172] (0xc000a27ef0) Reply frame received for 5
I0707 11:15:48.000052       6 log.go:172] (0xc000a27ef0) Data frame received for 5
I0707 11:15:48.000075       6 log.go:172] (0xc001813d60) (5) Data frame handling
I0707 11:15:48.000095       6 log.go:172] (0xc000a27ef0) Data frame received for 3
I0707 11:15:48.000128       6 log.go:172] (0xc0021a81e0) (3) Data frame handling
I0707 11:15:48.000161       6 log.go:172] (0xc0021a81e0) (3) Data frame sent
I0707 11:15:48.000183       6 log.go:172] (0xc000a27ef0) Data frame received for 3
I0707 11:15:48.000200       6 log.go:172] (0xc0021a81e0) (3) Data frame handling
I0707 11:15:48.002056       6 log.go:172] (0xc000a27ef0) Data frame received for 1
I0707 11:15:48.002115       6 log.go:172] (0xc00041cbe0) (1) Data frame handling
I0707 11:15:48.002144       6 log.go:172] (0xc00041cbe0) (1) Data frame sent
I0707 11:15:48.002168       6 log.go:172] (0xc000a27ef0) (0xc00041cbe0) Stream removed, broadcasting: 1
I0707 11:15:48.002193       6 log.go:172] (0xc000a27ef0) Go away received
I0707 11:15:48.002335       6 log.go:172] (0xc000a27ef0) (0xc00041cbe0) Stream removed, broadcasting: 1
I0707 11:15:48.002362       6 log.go:172] (0xc000a27ef0) (0xc0021a81e0) Stream removed, broadcasting: 3
I0707 11:15:48.002388       6 log.go:172] (0xc000a27ef0) (0xc001813d60) Stream removed, broadcasting: 5
Jul  7 11:15:48.002: INFO: Exec stderr: ""
Jul  7 11:15:48.002: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cxcg7 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 11:15:48.002: INFO: >>> kubeConfig: /root/.kube/config
I0707 11:15:48.036114       6 log.go:172] (0xc000947ef0) (0xc00174a6e0) Create stream
I0707 11:15:48.036139       6 log.go:172] (0xc000947ef0) (0xc00174a6e0) Stream added, broadcasting: 1
I0707 11:15:48.038329       6 log.go:172] (0xc000947ef0) Reply frame received for 1
I0707 11:15:48.038363       6 log.go:172] (0xc000947ef0) (0xc001813e00) Create stream
I0707 11:15:48.038375       6 log.go:172] (0xc000947ef0) (0xc001813e00) Stream added, broadcasting: 3
I0707 11:15:48.039250       6 log.go:172] (0xc000947ef0) Reply frame received for 3
I0707 11:15:48.039292       6 log.go:172] (0xc000947ef0) (0xc00041cd20) Create stream
I0707 11:15:48.039311       6 log.go:172] (0xc000947ef0) (0xc00041cd20) Stream added, broadcasting: 5
I0707 11:15:48.040232       6 log.go:172] (0xc000947ef0) Reply frame received for 5
I0707 11:15:48.100015       6 log.go:172] (0xc000947ef0) Data frame received for 5
I0707 11:15:48.100046       6 log.go:172] (0xc00041cd20) (5) Data frame handling
I0707 11:15:48.100069       6 log.go:172] (0xc000947ef0) Data frame received for 3
I0707 11:15:48.100087       6 log.go:172] (0xc001813e00) (3) Data frame handling
I0707 11:15:48.100099       6 log.go:172] (0xc001813e00) (3) Data frame sent
I0707 11:15:48.100106       6 log.go:172] (0xc000947ef0) Data frame received for 3
I0707 11:15:48.100119       6 log.go:172] (0xc001813e00) (3) Data frame handling
I0707 11:15:48.101717       6 log.go:172] (0xc000947ef0) Data frame received for 1
I0707 11:15:48.101751       6 log.go:172] (0xc00174a6e0) (1) Data frame handling
I0707 11:15:48.101785       6 log.go:172] (0xc00174a6e0) (1) Data frame sent
I0707 11:15:48.101811       6 log.go:172] (0xc000947ef0) (0xc00174a6e0) Stream removed, broadcasting: 1
I0707 11:15:48.101842       6 log.go:172] (0xc000947ef0) Go away received
I0707 11:15:48.101931       6 log.go:172] (0xc000947ef0) (0xc00174a6e0) Stream removed, broadcasting: 1
I0707 11:15:48.101953       6 log.go:172] (0xc000947ef0) (0xc001813e00) Stream removed, broadcasting: 3
I0707 11:15:48.101966       6 log.go:172] (0xc000947ef0) (0xc00041cd20) Stream removed, broadcasting: 5
Jul  7 11:15:48.101: INFO: Exec stderr: ""
Jul  7 11:15:48.101: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cxcg7 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 11:15:48.102: INFO: >>> kubeConfig: /root/.kube/config
I0707 11:15:48.130879       6 log.go:172] (0xc001e72420) (0xc00041d220) Create stream
I0707 11:15:48.130915       6 log.go:172] (0xc001e72420) (0xc00041d220) Stream added, broadcasting: 1
I0707 11:15:48.134130       6 log.go:172] (0xc001e72420) Reply frame received for 1
I0707 11:15:48.134166       6 log.go:172] (0xc001e72420) (0xc0018d03c0) Create stream
I0707 11:15:48.134178       6 log.go:172] (0xc001e72420) (0xc0018d03c0) Stream added, broadcasting: 3
I0707 11:15:48.134978       6 log.go:172] (0xc001e72420) Reply frame received for 3
I0707 11:15:48.135005       6 log.go:172] (0xc001e72420) (0xc0021a8280) Create stream
I0707 11:15:48.135014       6 log.go:172] (0xc001e72420) (0xc0021a8280) Stream added, broadcasting: 5
I0707 11:15:48.135764       6 log.go:172] (0xc001e72420) Reply frame received for 5
I0707 11:15:48.202125       6 log.go:172] (0xc001e72420) Data frame received for 3
I0707 11:15:48.202159       6 log.go:172] (0xc0018d03c0) (3) Data frame handling
I0707 11:15:48.202168       6 log.go:172] (0xc0018d03c0) (3) Data frame sent
I0707 11:15:48.202173       6 log.go:172] (0xc001e72420) Data frame received for 3
I0707 11:15:48.202179       6 log.go:172] (0xc0018d03c0) (3) Data frame handling
I0707 11:15:48.202199       6 log.go:172] (0xc001e72420) Data frame received for 5
I0707 11:15:48.202206       6 log.go:172] (0xc0021a8280) (5) Data frame handling
I0707 11:15:48.203098       6 log.go:172] (0xc001e72420) Data frame received for 1
I0707 11:15:48.203113       6 log.go:172] (0xc00041d220) (1) Data frame handling
I0707 11:15:48.203131       6 log.go:172] (0xc00041d220) (1) Data frame sent
I0707 11:15:48.203150       6 log.go:172] (0xc001e72420) (0xc00041d220) Stream removed, broadcasting: 1
I0707 11:15:48.203171       6 log.go:172] (0xc001e72420) Go away received
I0707 11:15:48.203335       6 log.go:172] (0xc001e72420) (0xc00041d220) Stream removed, broadcasting: 1
I0707 11:15:48.203359       6 log.go:172] (0xc001e72420) (0xc0018d03c0) Stream removed, broadcasting: 3
I0707 11:15:48.203369       6 log.go:172] (0xc001e72420) (0xc0021a8280) Stream removed, broadcasting: 5
Jul  7 11:15:48.203: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jul  7 11:15:48.203: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cxcg7 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 11:15:48.203: INFO: >>> kubeConfig: /root/.kube/config
I0707 11:15:48.224030       6 log.go:172] (0xc0008de420) (0xc00174aa00) Create stream
I0707 11:15:48.224049       6 log.go:172] (0xc0008de420) (0xc00174aa00) Stream added, broadcasting: 1
I0707 11:15:48.226315       6 log.go:172] (0xc0008de420) Reply frame received for 1
I0707 11:15:48.226340       6 log.go:172] (0xc0008de420) (0xc0018d0460) Create stream
I0707 11:15:48.226348       6 log.go:172] (0xc0008de420) (0xc0018d0460) Stream added, broadcasting: 3
I0707 11:15:48.227054       6 log.go:172] (0xc0008de420) Reply frame received for 3
I0707 11:15:48.227078       6 log.go:172] (0xc0008de420) (0xc00041d360) Create stream
I0707 11:15:48.227086       6 log.go:172] (0xc0008de420) (0xc00041d360) Stream added, broadcasting: 5
I0707 11:15:48.227714       6 log.go:172] (0xc0008de420) Reply frame received for 5
I0707 11:15:48.280109       6 log.go:172] (0xc0008de420) Data frame received for 5
I0707 11:15:48.280168       6 log.go:172] (0xc00041d360) (5) Data frame handling
I0707 11:15:48.280208       6 log.go:172] (0xc0008de420) Data frame received for 3
I0707 11:15:48.280233       6 log.go:172] (0xc0018d0460) (3) Data frame handling
I0707 11:15:48.280268       6 log.go:172] (0xc0018d0460) (3) Data frame sent
I0707 11:15:48.280290       6 log.go:172] (0xc0008de420) Data frame received for 3
I0707 11:15:48.280308       6 log.go:172] (0xc0018d0460) (3) Data frame handling
I0707 11:15:48.281914       6 log.go:172] (0xc0008de420) Data frame received for 1
I0707 11:15:48.281949       6 log.go:172] (0xc00174aa00) (1) Data frame handling
I0707 11:15:48.281978       6 log.go:172] (0xc00174aa00) (1) Data frame sent
I0707 11:15:48.282000       6 log.go:172] (0xc0008de420) (0xc00174aa00) Stream removed, broadcasting: 1
I0707 11:15:48.282114       6 log.go:172] (0xc0008de420) (0xc00174aa00) Stream removed, broadcasting: 1
I0707 11:15:48.282143       6 log.go:172] (0xc0008de420) (0xc0018d0460) Stream removed, broadcasting: 3
I0707 11:15:48.282156       6 log.go:172] (0xc0008de420) (0xc00041d360) Stream removed, broadcasting: 5
Jul  7 11:15:48.282: INFO: Exec stderr: ""
Jul  7 11:15:48.282: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cxcg7 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 11:15:48.282: INFO: >>> kubeConfig: /root/.kube/config
I0707 11:15:48.283977       6 log.go:172] (0xc0008de420) Go away received
I0707 11:15:48.312432       6 log.go:172] (0xc0008de790) (0xc00174ab40) Create stream
I0707 11:15:48.312463       6 log.go:172] (0xc0008de790) (0xc00174ab40) Stream added, broadcasting: 1
I0707 11:15:48.315176       6 log.go:172] (0xc0008de790) Reply frame received for 1
I0707 11:15:48.315221       6 log.go:172] (0xc0008de790) (0xc0018d0640) Create stream
I0707 11:15:48.315238       6 log.go:172] (0xc0008de790) (0xc0018d0640) Stream added, broadcasting: 3
I0707 11:15:48.316401       6 log.go:172] (0xc0008de790) Reply frame received for 3
I0707 11:15:48.316449       6 log.go:172] (0xc0008de790) (0xc0021a8320) Create stream
I0707 11:15:48.316467       6 log.go:172] (0xc0008de790) (0xc0021a8320) Stream added, broadcasting: 5
I0707 11:15:48.318016       6 log.go:172] (0xc0008de790) Reply frame received for 5
I0707 11:15:48.394681       6 log.go:172] (0xc0008de790) Data frame received for 5
I0707 11:15:48.394722       6 log.go:172] (0xc0021a8320) (5) Data frame handling
I0707 11:15:48.394748       6 log.go:172] (0xc0008de790) Data frame received for 3
I0707 11:15:48.394762       6 log.go:172] (0xc0018d0640) (3) Data frame handling
I0707 11:15:48.394777       6 log.go:172] (0xc0018d0640) (3) Data frame sent
I0707 11:15:48.394791       6 log.go:172] (0xc0008de790) Data frame received for 3
I0707 11:15:48.394803       6 log.go:172] (0xc0018d0640) (3) Data frame handling
I0707 11:15:48.395945       6 log.go:172] (0xc0008de790) Data frame received for 1
I0707 11:15:48.395986       6 log.go:172] (0xc00174ab40) (1) Data frame handling
I0707 11:15:48.396018       6 log.go:172] (0xc00174ab40) (1) Data frame sent
I0707 11:15:48.396053       6 log.go:172] (0xc0008de790) (0xc00174ab40) Stream removed, broadcasting: 1
I0707 11:15:48.396093       6 log.go:172] (0xc0008de790) Go away received
I0707 11:15:48.396293       6 log.go:172] (0xc0008de790) (0xc00174ab40) Stream removed, broadcasting: 1
I0707 11:15:48.396332       6 log.go:172] (0xc0008de790) (0xc0018d0640) Stream removed, broadcasting: 3
I0707 11:15:48.396358       6 log.go:172] (0xc0008de790) (0xc0021a8320) Stream removed, broadcasting: 5
Jul  7 11:15:48.396: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jul  7 11:15:48.396: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cxcg7 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 11:15:48.396: INFO: >>> kubeConfig: /root/.kube/config
I0707 11:15:48.434536       6 log.go:172] (0xc0009a02c0) (0xc0021a85a0) Create stream
I0707 11:15:48.434571       6 log.go:172] (0xc0009a02c0) (0xc0021a85a0) Stream added, broadcasting: 1
I0707 11:15:48.443196       6 log.go:172] (0xc0009a02c0) Reply frame received for 1
I0707 11:15:48.443336       6 log.go:172] (0xc0009a02c0) (0xc000f26000) Create stream
I0707 11:15:48.443437       6 log.go:172] (0xc0009a02c0) (0xc000f26000) Stream added, broadcasting: 3
I0707 11:15:48.447520       6 log.go:172] (0xc0009a02c0) Reply frame received for 3
I0707 11:15:48.447574       6 log.go:172] (0xc0009a02c0) (0xc00041d4a0) Create stream
I0707 11:15:48.447584       6 log.go:172] (0xc0009a02c0) (0xc00041d4a0) Stream added, broadcasting: 5
I0707 11:15:48.448416       6 log.go:172] (0xc0009a02c0) Reply frame received for 5
I0707 11:15:48.518127       6 log.go:172] (0xc0009a02c0) Data frame received for 5
I0707 11:15:48.518172       6 log.go:172] (0xc0009a02c0) Data frame received for 3
I0707 11:15:48.518215       6 log.go:172] (0xc000f26000) (3) Data frame handling
I0707 11:15:48.518240       6 log.go:172] (0xc000f26000) (3) Data frame sent
I0707 11:15:48.518258       6 log.go:172] (0xc0009a02c0) Data frame received for 3
I0707 11:15:48.518274       6 log.go:172] (0xc000f26000) (3) Data frame handling
I0707 11:15:48.518325       6 log.go:172] (0xc00041d4a0) (5) Data frame handling
I0707 11:15:48.519700       6 log.go:172] (0xc0009a02c0) Data frame received for 1
I0707 11:15:48.519722       6 log.go:172] (0xc0021a85a0) (1) Data frame handling
I0707 11:15:48.519743       6 log.go:172] (0xc0021a85a0) (1) Data frame sent
I0707 11:15:48.519760       6 log.go:172] (0xc0009a02c0) (0xc0021a85a0) Stream removed, broadcasting: 1
I0707 11:15:48.519821       6 log.go:172] (0xc0009a02c0) (0xc0021a85a0) Stream removed, broadcasting: 1
I0707 11:15:48.519834       6 log.go:172] (0xc0009a02c0) (0xc000f26000) Stream removed, broadcasting: 3
I0707 11:15:48.519900       6 log.go:172] (0xc0009a02c0) Go away received
I0707 11:15:48.519934       6 log.go:172] (0xc0009a02c0) (0xc00041d4a0) Stream removed, broadcasting: 5
Jul  7 11:15:48.519: INFO: Exec stderr: ""
Jul  7 11:15:48.520: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cxcg7 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 11:15:48.520: INFO: >>> kubeConfig: /root/.kube/config
I0707 11:15:48.555471       6 log.go:172] (0xc000947d90) (0xc0018ec280) Create stream
I0707 11:15:48.555492       6 log.go:172] (0xc000947d90) (0xc0018ec280) Stream added, broadcasting: 1
I0707 11:15:48.557724       6 log.go:172] (0xc000947d90) Reply frame received for 1
I0707 11:15:48.557768       6 log.go:172] (0xc000947d90) (0xc0018ec3c0) Create stream
I0707 11:15:48.557782       6 log.go:172] (0xc000947d90) (0xc0018ec3c0) Stream added, broadcasting: 3
I0707 11:15:48.559045       6 log.go:172] (0xc000947d90) Reply frame received for 3
I0707 11:15:48.559082       6 log.go:172] (0xc000947d90) (0xc0018ec460) Create stream
I0707 11:15:48.559095       6 log.go:172] (0xc000947d90) (0xc0018ec460) Stream added, broadcasting: 5
I0707 11:15:48.560039       6 log.go:172] (0xc000947d90) Reply frame received for 5
I0707 11:15:48.614972       6 log.go:172] (0xc000947d90) Data frame received for 5
I0707 11:15:48.615015       6 log.go:172] (0xc0018ec460) (5) Data frame handling
I0707 11:15:48.615048       6 log.go:172] (0xc000947d90) Data frame received for 3
I0707 11:15:48.615067       6 log.go:172] (0xc0018ec3c0) (3) Data frame handling
I0707 11:15:48.615087       6 log.go:172] (0xc0018ec3c0) (3) Data frame sent
I0707 11:15:48.615102       6 log.go:172] (0xc000947d90) Data frame received for 3
I0707 11:15:48.615114       6 log.go:172] (0xc0018ec3c0) (3) Data frame handling
I0707 11:15:48.616604       6 log.go:172] (0xc000947d90) Data frame received for 1
I0707 11:15:48.616654       6 log.go:172] (0xc0018ec280) (1) Data frame handling
I0707 11:15:48.616693       6 log.go:172] (0xc0018ec280) (1) Data frame sent
I0707 11:15:48.616726       6 log.go:172] (0xc000947d90) (0xc0018ec280) Stream removed, broadcasting: 1
I0707 11:15:48.616759       6 log.go:172] (0xc000947d90) Go away received
I0707 11:15:48.616995       6 log.go:172] (0xc000947d90) (0xc0018ec280) Stream removed, broadcasting: 1
I0707 11:15:48.617051       6 log.go:172] (0xc000947d90) (0xc0018ec3c0) Stream removed, broadcasting: 3
I0707 11:15:48.617076       6 log.go:172] (0xc000947d90) (0xc0018ec460) Stream removed, broadcasting: 5
Jul  7 11:15:48.617: INFO: Exec stderr: ""
Jul  7 11:15:48.617: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cxcg7 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 11:15:48.617: INFO: >>> kubeConfig: /root/.kube/config
I0707 11:15:48.651363       6 log.go:172] (0xc0009a0580) (0xc0014fe280) Create stream
I0707 11:15:48.651395       6 log.go:172] (0xc0009a0580) (0xc0014fe280) Stream added, broadcasting: 1
I0707 11:15:48.653980       6 log.go:172] (0xc0009a0580) Reply frame received for 1
I0707 11:15:48.654025       6 log.go:172] (0xc0009a0580) (0xc0018ec500) Create stream
I0707 11:15:48.654041       6 log.go:172] (0xc0009a0580) (0xc0018ec500) Stream added, broadcasting: 3
I0707 11:15:48.654989       6 log.go:172] (0xc0009a0580) Reply frame received for 3
I0707 11:15:48.655039       6 log.go:172] (0xc0009a0580) (0xc000d2e0a0) Create stream
I0707 11:15:48.655055       6 log.go:172] (0xc0009a0580) (0xc000d2e0a0) Stream added, broadcasting: 5
I0707 11:15:48.655924       6 log.go:172] (0xc0009a0580) Reply frame received for 5
I0707 11:15:48.718181       6 log.go:172] (0xc0009a0580) Data frame received for 5
I0707 11:15:48.718223       6 log.go:172] (0xc000d2e0a0) (5) Data frame handling
I0707 11:15:48.718258       6 log.go:172] (0xc0009a0580) Data frame received for 3
I0707 11:15:48.718289       6 log.go:172] (0xc0018ec500) (3) Data frame handling
I0707 11:15:48.718315       6 log.go:172] (0xc0018ec500) (3) Data frame sent
I0707 11:15:48.718331       6 log.go:172] (0xc0009a0580) Data frame received for 3
I0707 11:15:48.718343       6 log.go:172] (0xc0018ec500) (3) Data frame handling
I0707 11:15:48.720121       6 log.go:172] (0xc0009a0580) Data frame received for 1
I0707 11:15:48.720145       6 log.go:172] (0xc0014fe280) (1) Data frame handling
I0707 11:15:48.720162       6 log.go:172] (0xc0014fe280) (1) Data frame sent
I0707 11:15:48.720176       6 log.go:172] (0xc0009a0580) (0xc0014fe280) Stream removed, broadcasting: 1
I0707 11:15:48.720193       6 log.go:172] (0xc0009a0580) Go away received
I0707 11:15:48.720311       6 log.go:172] (0xc0009a0580) (0xc0014fe280) Stream removed, broadcasting: 1
I0707 11:15:48.720347       6 log.go:172] (0xc0009a0580) (0xc0018ec500) Stream removed, broadcasting: 3
I0707 11:15:48.720371       6 log.go:172] (0xc0009a0580) (0xc000d2e0a0) Stream removed, broadcasting: 5
Jul  7 11:15:48.720: INFO: Exec stderr: ""
Jul  7 11:15:48.720: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-cxcg7 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 11:15:48.720: INFO: >>> kubeConfig: /root/.kube/config
I0707 11:15:48.759033       6 log.go:172] (0xc0009a0a50) (0xc0014fe500) Create stream
I0707 11:15:48.759073       6 log.go:172] (0xc0009a0a50) (0xc0014fe500) Stream added, broadcasting: 1
I0707 11:15:48.761040       6 log.go:172] (0xc0009a0a50) Reply frame received for 1
I0707 11:15:48.761091       6 log.go:172] (0xc0009a0a50) (0xc001a1e0a0) Create stream
I0707 11:15:48.761254       6 log.go:172] (0xc0009a0a50) (0xc001a1e0a0) Stream added, broadcasting: 3
I0707 11:15:48.762196       6 log.go:172] (0xc0009a0a50) Reply frame received for 3
I0707 11:15:48.762255       6 log.go:172] (0xc0009a0a50) (0xc0018ec5a0) Create stream
I0707 11:15:48.762273       6 log.go:172] (0xc0009a0a50) (0xc0018ec5a0) Stream added, broadcasting: 5
I0707 11:15:48.763156       6 log.go:172] (0xc0009a0a50) Reply frame received for 5
I0707 11:15:48.829933       6 log.go:172] (0xc0009a0a50) Data frame received for 3
I0707 11:15:48.829964       6 log.go:172] (0xc001a1e0a0) (3) Data frame handling
I0707 11:15:48.829984       6 log.go:172] (0xc001a1e0a0) (3) Data frame sent
I0707 11:15:48.829996       6 log.go:172] (0xc0009a0a50) Data frame received for 3
I0707 11:15:48.830006       6 log.go:172] (0xc001a1e0a0) (3) Data frame handling
I0707 11:15:48.830066       6 log.go:172] (0xc0009a0a50) Data frame received for 5
I0707 11:15:48.830101       6 log.go:172] (0xc0018ec5a0) (5) Data frame handling
I0707 11:15:48.831383       6 log.go:172] (0xc0009a0a50) Data frame received for 1
I0707 11:15:48.831411       6 log.go:172] (0xc0014fe500) (1) Data frame handling
I0707 11:15:48.831432       6 log.go:172] (0xc0014fe500) (1) Data frame sent
I0707 11:15:48.831448       6 log.go:172] (0xc0009a0a50) (0xc0014fe500) Stream removed, broadcasting: 1
I0707 11:15:48.831461       6 log.go:172] (0xc0009a0a50) Go away received
I0707 11:15:48.831610       6 log.go:172] (0xc0009a0a50) (0xc0014fe500) Stream removed, broadcasting: 1
I0707 11:15:48.831634       6 log.go:172] (0xc0009a0a50) (0xc001a1e0a0) Stream removed, broadcasting: 3
I0707 11:15:48.831647       6 log.go:172] (0xc0009a0a50) (0xc0018ec5a0) Stream removed, broadcasting: 5
Jul  7 11:15:48.831: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:15:48.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-cxcg7" for this suite.
Jul  7 11:16:28.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:16:28.901: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-cxcg7, resource: bindings, ignored listing per whitelist
Jul  7 11:16:28.929: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-cxcg7 deletion completed in 40.093432801s

• [SLOW TEST:55.690 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
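
The three verification phases above distinguish when the kubelet manages /etc/hosts: it does for containers of a hostNetwork=false pod that do not mount /etc/hosts themselves (busybox-1 and busybox-2 of test-pod, whose file typically carries a "# Kubernetes-managed hosts file" header), it does not for a container that mounts the file explicitly (busybox-3), and it does not for any container of a hostNetwork=true pod (test-host-network-pod). A minimal sketch of the two pods follows; it is not the test's own code, and the images, commands and volume names are assumptions.

// Minimal sketch (assumed manifests): one pod with a container that
// mounts /etc/hosts from the node, and one hostNetwork pod.
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    img := "docker.io/library/busybox:1.29" // assumed image
    hostsMount := corev1.VolumeMount{Name: "host-etc-hosts", MountPath: "/etc/hosts"}

    testPod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{
                {Name: "busybox-1", Image: img, Command: []string{"sleep", "900"}},
                {Name: "busybox-2", Image: img, Command: []string{"sleep", "900"}},
                // busybox-3 mounts /etc/hosts itself, so it is NOT kubelet-managed.
                {Name: "busybox-3", Image: img, Command: []string{"sleep", "900"},
                    VolumeMounts: []corev1.VolumeMount{hostsMount}},
            },
            Volumes: []corev1.Volume{{
                Name: "host-etc-hosts",
                VolumeSource: corev1.VolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
                },
            }},
        },
    }

    hostNetPod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
        Spec: corev1.PodSpec{
            HostNetwork: true, // /etc/hosts stays the node's own file
            Containers: []corev1.Container{
                {Name: "busybox-1", Image: img, Command: []string{"sleep", "900"}},
                {Name: "busybox-2", Image: img, Command: []string{"sleep", "900"}},
            },
        },
    }

    for _, p := range []corev1.Pod{testPod, hostNetPod} {
        b, _ := json.MarshalIndent(p, "", "  ")
        fmt.Println(string(b))
    }
}
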
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:16:28.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-nhzls
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-nhzls
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-nhzls
Jul  7 11:16:29.062: INFO: Found 0 stateful pods, waiting for 1
Jul  7 11:16:39.067: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul  7 11:16:39.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nhzls ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  7 11:16:39.356: INFO: stderr: "I0707 11:16:39.210215     366 log.go:172] (0xc000138790) (0xc0004ff360) Create stream\nI0707 11:16:39.210292     366 log.go:172] (0xc000138790) (0xc0004ff360) Stream added, broadcasting: 1\nI0707 11:16:39.212217     366 log.go:172] (0xc000138790) Reply frame received for 1\nI0707 11:16:39.212264     366 log.go:172] (0xc000138790) (0xc000644000) Create stream\nI0707 11:16:39.212275     366 log.go:172] (0xc000138790) (0xc000644000) Stream added, broadcasting: 3\nI0707 11:16:39.213023     366 log.go:172] (0xc000138790) Reply frame received for 3\nI0707 11:16:39.213047     366 log.go:172] (0xc000138790) (0xc0004ff400) Create stream\nI0707 11:16:39.213054     366 log.go:172] (0xc000138790) (0xc0004ff400) Stream added, broadcasting: 5\nI0707 11:16:39.214055     366 log.go:172] (0xc000138790) Reply frame received for 5\nI0707 11:16:39.347836     366 log.go:172] (0xc000138790) Data frame received for 3\nI0707 11:16:39.347891     366 log.go:172] (0xc000644000) (3) Data frame handling\nI0707 11:16:39.347982     366 log.go:172] (0xc000644000) (3) Data frame sent\nI0707 11:16:39.348130     366 log.go:172] (0xc000138790) Data frame received for 3\nI0707 11:16:39.348155     366 log.go:172] (0xc000644000) (3) Data frame handling\nI0707 11:16:39.348187     366 log.go:172] (0xc000138790) Data frame received for 5\nI0707 11:16:39.348224     366 log.go:172] (0xc0004ff400) (5) Data frame handling\nI0707 11:16:39.350525     366 log.go:172] (0xc000138790) Data frame received for 1\nI0707 11:16:39.350559     366 log.go:172] (0xc0004ff360) (1) Data frame handling\nI0707 11:16:39.350578     366 log.go:172] (0xc0004ff360) (1) Data frame sent\nI0707 11:16:39.350599     366 log.go:172] (0xc000138790) (0xc0004ff360) Stream removed, broadcasting: 1\nI0707 11:16:39.350847     366 log.go:172] (0xc000138790) (0xc0004ff360) Stream removed, broadcasting: 1\nI0707 11:16:39.350882     366 log.go:172] (0xc000138790) (0xc000644000) Stream removed, broadcasting: 3\nI0707 11:16:39.351186     366 log.go:172] (0xc000138790) (0xc0004ff400) Stream removed, broadcasting: 5\nI0707 11:16:39.351225     366 log.go:172] (0xc000138790) Go away received\n"
Jul  7 11:16:39.356: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  7 11:16:39.356: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  7 11:16:39.360: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul  7 11:16:49.432: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  7 11:16:49.432: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 11:16:49.451: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999613s
Jul  7 11:16:50.456: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991175868s
Jul  7 11:16:51.460: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.986566964s
Jul  7 11:16:52.708: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.982291327s
Jul  7 11:16:53.712: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.734783562s
Jul  7 11:16:54.717: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.729752702s
Jul  7 11:16:55.722: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.725403573s
Jul  7 11:16:56.728: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.720246464s
Jul  7 11:16:57.733: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.714728562s
Jul  7 11:16:58.738: INFO: Verifying statefulset ss doesn't scale past 1 for another 709.529346ms
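
The halt observed above is produced by breaking the pods' readiness: the stateful pods appear to serve nginx with a readiness probe on index.html, so moving that file out of the webroot flips the pod to Ready=false, and under the default OrderedReady pod management the controller will not create (or, later, delete) further replicas until the pod is healthy again. A small sketch of the break/restore step as plain kubectl exec calls, mirroring the commands in the log, follows; it is not the framework's own code, and kubectl configuration is taken from the environment here.

// Minimal sketch (assumed helper): break and restore the readiness of a
// stateful pod the same way the kubectl exec calls in the log do.
package main

import (
    "fmt"
    "os/exec"
)

// execInPod runs a shell command in a pod via kubectl exec.
func execInPod(ns, pod, cmd string) {
    out, err := exec.Command("kubectl", "--namespace", ns, "exec", pod,
        "--", "/bin/sh", "-c", cmd).CombinedOutput()
    fmt.Printf("%s: %s (err=%v)\n", pod, out, err)
}

func main() {
    ns := "e2e-tests-statefulset-nhzls" // namespace from the run above

    // Break readiness: index.html disappears from the webroot, the HTTP
    // readiness probe starts failing, and scaling halts at the current size.
    execInPod(ns, "ss-0", "mv -v /usr/share/nginx/html/index.html /tmp/ || true")

    // Restore readiness: the probe succeeds again and scaling resumes.
    execInPod(ns, "ss-0", "mv -v /tmp/index.html /usr/share/nginx/html/ || true")
}
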
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-nhzls
Jul  7 11:16:59.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nhzls ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:16:59.967: INFO: stderr: "I0707 11:16:59.894085     388 log.go:172] (0xc000218370) (0xc0006f4640) Create stream\nI0707 11:16:59.894173     388 log.go:172] (0xc000218370) (0xc0006f4640) Stream added, broadcasting: 1\nI0707 11:16:59.896500     388 log.go:172] (0xc000218370) Reply frame received for 1\nI0707 11:16:59.896554     388 log.go:172] (0xc000218370) (0xc00041cc80) Create stream\nI0707 11:16:59.896572     388 log.go:172] (0xc000218370) (0xc00041cc80) Stream added, broadcasting: 3\nI0707 11:16:59.897867     388 log.go:172] (0xc000218370) Reply frame received for 3\nI0707 11:16:59.897904     388 log.go:172] (0xc000218370) (0xc00041cdc0) Create stream\nI0707 11:16:59.897916     388 log.go:172] (0xc000218370) (0xc00041cdc0) Stream added, broadcasting: 5\nI0707 11:16:59.898840     388 log.go:172] (0xc000218370) Reply frame received for 5\nI0707 11:16:59.962644     388 log.go:172] (0xc000218370) Data frame received for 3\nI0707 11:16:59.962680     388 log.go:172] (0xc00041cc80) (3) Data frame handling\nI0707 11:16:59.962689     388 log.go:172] (0xc00041cc80) (3) Data frame sent\nI0707 11:16:59.962697     388 log.go:172] (0xc000218370) Data frame received for 3\nI0707 11:16:59.962706     388 log.go:172] (0xc00041cc80) (3) Data frame handling\nI0707 11:16:59.962739     388 log.go:172] (0xc000218370) Data frame received for 5\nI0707 11:16:59.962745     388 log.go:172] (0xc00041cdc0) (5) Data frame handling\nI0707 11:16:59.963845     388 log.go:172] (0xc000218370) Data frame received for 1\nI0707 11:16:59.963856     388 log.go:172] (0xc0006f4640) (1) Data frame handling\nI0707 11:16:59.963870     388 log.go:172] (0xc0006f4640) (1) Data frame sent\nI0707 11:16:59.963946     388 log.go:172] (0xc000218370) (0xc0006f4640) Stream removed, broadcasting: 1\nI0707 11:16:59.964044     388 log.go:172] (0xc000218370) Go away received\nI0707 11:16:59.964122     388 log.go:172] (0xc000218370) (0xc0006f4640) Stream removed, broadcasting: 1\nI0707 11:16:59.964135     388 log.go:172] (0xc000218370) (0xc00041cc80) Stream removed, broadcasting: 3\nI0707 11:16:59.964144     388 log.go:172] (0xc000218370) (0xc00041cdc0) Stream removed, broadcasting: 5\n"
Jul  7 11:16:59.967: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  7 11:16:59.967: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  7 11:16:59.971: INFO: Found 1 stateful pods, waiting for 3
Jul  7 11:17:09.976: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 11:17:09.976: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 11:17:09.976: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  7 11:17:19.976: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 11:17:19.976: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 11:17:19.976: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jul  7 11:17:19.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nhzls ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  7 11:17:20.208: INFO: stderr: "I0707 11:17:20.116843     410 log.go:172] (0xc00016a840) (0xc000744640) Create stream\nI0707 11:17:20.116907     410 log.go:172] (0xc00016a840) (0xc000744640) Stream added, broadcasting: 1\nI0707 11:17:20.119452     410 log.go:172] (0xc00016a840) Reply frame received for 1\nI0707 11:17:20.119497     410 log.go:172] (0xc00016a840) (0xc0005aad20) Create stream\nI0707 11:17:20.119510     410 log.go:172] (0xc00016a840) (0xc0005aad20) Stream added, broadcasting: 3\nI0707 11:17:20.120551     410 log.go:172] (0xc00016a840) Reply frame received for 3\nI0707 11:17:20.120622     410 log.go:172] (0xc00016a840) (0xc0006d4000) Create stream\nI0707 11:17:20.120644     410 log.go:172] (0xc00016a840) (0xc0006d4000) Stream added, broadcasting: 5\nI0707 11:17:20.121859     410 log.go:172] (0xc00016a840) Reply frame received for 5\nI0707 11:17:20.197873     410 log.go:172] (0xc00016a840) Data frame received for 5\nI0707 11:17:20.197912     410 log.go:172] (0xc0006d4000) (5) Data frame handling\nI0707 11:17:20.197933     410 log.go:172] (0xc00016a840) Data frame received for 3\nI0707 11:17:20.197940     410 log.go:172] (0xc0005aad20) (3) Data frame handling\nI0707 11:17:20.197949     410 log.go:172] (0xc0005aad20) (3) Data frame sent\nI0707 11:17:20.197956     410 log.go:172] (0xc00016a840) Data frame received for 3\nI0707 11:17:20.197966     410 log.go:172] (0xc0005aad20) (3) Data frame handling\nI0707 11:17:20.199646     410 log.go:172] (0xc00016a840) Data frame received for 1\nI0707 11:17:20.199673     410 log.go:172] (0xc000744640) (1) Data frame handling\nI0707 11:17:20.199698     410 log.go:172] (0xc000744640) (1) Data frame sent\nI0707 11:17:20.199740     410 log.go:172] (0xc00016a840) (0xc000744640) Stream removed, broadcasting: 1\nI0707 11:17:20.199791     410 log.go:172] (0xc00016a840) Go away received\nI0707 11:17:20.200004     410 log.go:172] (0xc00016a840) (0xc000744640) Stream removed, broadcasting: 1\nI0707 11:17:20.200033     410 log.go:172] (0xc00016a840) (0xc0005aad20) Stream removed, broadcasting: 3\nI0707 11:17:20.200055     410 log.go:172] (0xc00016a840) (0xc0006d4000) Stream removed, broadcasting: 5\n"
Jul  7 11:17:20.208: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  7 11:17:20.208: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  7 11:17:20.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nhzls ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  7 11:17:20.439: INFO: stderr: "I0707 11:17:20.333439     433 log.go:172] (0xc00082e2c0) (0xc000441360) Create stream\nI0707 11:17:20.333507     433 log.go:172] (0xc00082e2c0) (0xc000441360) Stream added, broadcasting: 1\nI0707 11:17:20.335963     433 log.go:172] (0xc00082e2c0) Reply frame received for 1\nI0707 11:17:20.336013     433 log.go:172] (0xc00082e2c0) (0xc000441400) Create stream\nI0707 11:17:20.336029     433 log.go:172] (0xc00082e2c0) (0xc000441400) Stream added, broadcasting: 3\nI0707 11:17:20.337010     433 log.go:172] (0xc00082e2c0) Reply frame received for 3\nI0707 11:17:20.337061     433 log.go:172] (0xc00082e2c0) (0xc000456000) Create stream\nI0707 11:17:20.337078     433 log.go:172] (0xc00082e2c0) (0xc000456000) Stream added, broadcasting: 5\nI0707 11:17:20.338206     433 log.go:172] (0xc00082e2c0) Reply frame received for 5\nI0707 11:17:20.433079     433 log.go:172] (0xc00082e2c0) Data frame received for 3\nI0707 11:17:20.433108     433 log.go:172] (0xc000441400) (3) Data frame handling\nI0707 11:17:20.433294     433 log.go:172] (0xc000441400) (3) Data frame sent\nI0707 11:17:20.433305     433 log.go:172] (0xc00082e2c0) Data frame received for 3\nI0707 11:17:20.433313     433 log.go:172] (0xc000441400) (3) Data frame handling\nI0707 11:17:20.433448     433 log.go:172] (0xc00082e2c0) Data frame received for 5\nI0707 11:17:20.433482     433 log.go:172] (0xc000456000) (5) Data frame handling\nI0707 11:17:20.435398     433 log.go:172] (0xc00082e2c0) Data frame received for 1\nI0707 11:17:20.435417     433 log.go:172] (0xc000441360) (1) Data frame handling\nI0707 11:17:20.435431     433 log.go:172] (0xc000441360) (1) Data frame sent\nI0707 11:17:20.435456     433 log.go:172] (0xc00082e2c0) (0xc000441360) Stream removed, broadcasting: 1\nI0707 11:17:20.435646     433 log.go:172] (0xc00082e2c0) Go away received\nI0707 11:17:20.435683     433 log.go:172] (0xc00082e2c0) (0xc000441360) Stream removed, broadcasting: 1\nI0707 11:17:20.435702     433 log.go:172] (0xc00082e2c0) (0xc000441400) Stream removed, broadcasting: 3\nI0707 11:17:20.435712     433 log.go:172] (0xc00082e2c0) (0xc000456000) Stream removed, broadcasting: 5\n"
Jul  7 11:17:20.440: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  7 11:17:20.440: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  7 11:17:20.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nhzls ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  7 11:17:20.747: INFO: stderr: "I0707 11:17:20.624999     455 log.go:172] (0xc000832160) (0xc000718000) Create stream\nI0707 11:17:20.625062     455 log.go:172] (0xc000832160) (0xc000718000) Stream added, broadcasting: 1\nI0707 11:17:20.627955     455 log.go:172] (0xc000832160) Reply frame received for 1\nI0707 11:17:20.628002     455 log.go:172] (0xc000832160) (0xc0004f0c80) Create stream\nI0707 11:17:20.628016     455 log.go:172] (0xc000832160) (0xc0004f0c80) Stream added, broadcasting: 3\nI0707 11:17:20.629279     455 log.go:172] (0xc000832160) Reply frame received for 3\nI0707 11:17:20.629320     455 log.go:172] (0xc000832160) (0xc000718140) Create stream\nI0707 11:17:20.629337     455 log.go:172] (0xc000832160) (0xc000718140) Stream added, broadcasting: 5\nI0707 11:17:20.630240     455 log.go:172] (0xc000832160) Reply frame received for 5\nI0707 11:17:20.739801     455 log.go:172] (0xc000832160) Data frame received for 3\nI0707 11:17:20.739848     455 log.go:172] (0xc0004f0c80) (3) Data frame handling\nI0707 11:17:20.739886     455 log.go:172] (0xc0004f0c80) (3) Data frame sent\nI0707 11:17:20.739906     455 log.go:172] (0xc000832160) Data frame received for 3\nI0707 11:17:20.739925     455 log.go:172] (0xc0004f0c80) (3) Data frame handling\nI0707 11:17:20.739948     455 log.go:172] (0xc000832160) Data frame received for 5\nI0707 11:17:20.739959     455 log.go:172] (0xc000718140) (5) Data frame handling\nI0707 11:17:20.742504     455 log.go:172] (0xc000832160) Data frame received for 1\nI0707 11:17:20.742552     455 log.go:172] (0xc000718000) (1) Data frame handling\nI0707 11:17:20.742576     455 log.go:172] (0xc000718000) (1) Data frame sent\nI0707 11:17:20.742627     455 log.go:172] (0xc000832160) (0xc000718000) Stream removed, broadcasting: 1\nI0707 11:17:20.742655     455 log.go:172] (0xc000832160) Go away received\nI0707 11:17:20.742992     455 log.go:172] (0xc000832160) (0xc000718000) Stream removed, broadcasting: 1\nI0707 11:17:20.743027     455 log.go:172] (0xc000832160) (0xc0004f0c80) Stream removed, broadcasting: 3\nI0707 11:17:20.743047     455 log.go:172] (0xc000832160) (0xc000718140) Stream removed, broadcasting: 5\n"
Jul  7 11:17:20.748: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  7 11:17:20.748: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  7 11:17:20.748: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 11:17:20.751: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jul  7 11:17:30.757: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  7 11:17:30.757: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul  7 11:17:30.757: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul  7 11:17:30.770: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999342s
Jul  7 11:17:31.776: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992975494s
Jul  7 11:17:32.867: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987531721s
Jul  7 11:17:33.873: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.896055134s
Jul  7 11:17:34.879: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.890216956s
Jul  7 11:17:35.883: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.88481342s
Jul  7 11:17:36.888: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.880231155s
Jul  7 11:17:37.893: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.875342612s
Jul  7 11:17:38.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.869916521s
Jul  7 11:17:39.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 865.347682ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-nhzls
Jul  7 11:17:40.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nhzls ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:17:41.150: INFO: stderr: "I0707 11:17:41.080885     477 log.go:172] (0xc0007ce420) (0xc0005e9400) Create stream\nI0707 11:17:41.080937     477 log.go:172] (0xc0007ce420) (0xc0005e9400) Stream added, broadcasting: 1\nI0707 11:17:41.082766     477 log.go:172] (0xc0007ce420) Reply frame received for 1\nI0707 11:17:41.082891     477 log.go:172] (0xc0007ce420) (0xc0001a2000) Create stream\nI0707 11:17:41.082902     477 log.go:172] (0xc0007ce420) (0xc0001a2000) Stream added, broadcasting: 3\nI0707 11:17:41.083663     477 log.go:172] (0xc0007ce420) Reply frame received for 3\nI0707 11:17:41.083727     477 log.go:172] (0xc0007ce420) (0xc0001a6000) Create stream\nI0707 11:17:41.083739     477 log.go:172] (0xc0007ce420) (0xc0001a6000) Stream added, broadcasting: 5\nI0707 11:17:41.084464     477 log.go:172] (0xc0007ce420) Reply frame received for 5\nI0707 11:17:41.144328     477 log.go:172] (0xc0007ce420) Data frame received for 5\nI0707 11:17:41.144380     477 log.go:172] (0xc0001a6000) (5) Data frame handling\nI0707 11:17:41.144421     477 log.go:172] (0xc0007ce420) Data frame received for 3\nI0707 11:17:41.144447     477 log.go:172] (0xc0001a2000) (3) Data frame handling\nI0707 11:17:41.144479     477 log.go:172] (0xc0001a2000) (3) Data frame sent\nI0707 11:17:41.144505     477 log.go:172] (0xc0007ce420) Data frame received for 3\nI0707 11:17:41.144528     477 log.go:172] (0xc0001a2000) (3) Data frame handling\nI0707 11:17:41.145959     477 log.go:172] (0xc0007ce420) Data frame received for 1\nI0707 11:17:41.145969     477 log.go:172] (0xc0005e9400) (1) Data frame handling\nI0707 11:17:41.145975     477 log.go:172] (0xc0005e9400) (1) Data frame sent\nI0707 11:17:41.145981     477 log.go:172] (0xc0007ce420) (0xc0005e9400) Stream removed, broadcasting: 1\nI0707 11:17:41.146025     477 log.go:172] (0xc0007ce420) Go away received\nI0707 11:17:41.146081     477 log.go:172] (0xc0007ce420) (0xc0005e9400) Stream removed, broadcasting: 1\nI0707 11:17:41.146091     477 log.go:172] (0xc0007ce420) (0xc0001a2000) Stream removed, broadcasting: 3\nI0707 11:17:41.146096     477 log.go:172] (0xc0007ce420) (0xc0001a6000) Stream removed, broadcasting: 5\n"
Jul  7 11:17:41.150: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  7 11:17:41.150: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  7 11:17:41.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nhzls ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:17:41.335: INFO: stderr: "I0707 11:17:41.282406     498 log.go:172] (0xc000138840) (0xc000776640) Create stream\nI0707 11:17:41.282454     498 log.go:172] (0xc000138840) (0xc000776640) Stream added, broadcasting: 1\nI0707 11:17:41.284450     498 log.go:172] (0xc000138840) Reply frame received for 1\nI0707 11:17:41.284496     498 log.go:172] (0xc000138840) (0xc0007766e0) Create stream\nI0707 11:17:41.284510     498 log.go:172] (0xc000138840) (0xc0007766e0) Stream added, broadcasting: 3\nI0707 11:17:41.285442     498 log.go:172] (0xc000138840) Reply frame received for 3\nI0707 11:17:41.285471     498 log.go:172] (0xc000138840) (0xc000776780) Create stream\nI0707 11:17:41.285479     498 log.go:172] (0xc000138840) (0xc000776780) Stream added, broadcasting: 5\nI0707 11:17:41.286287     498 log.go:172] (0xc000138840) Reply frame received for 5\nI0707 11:17:41.329846     498 log.go:172] (0xc000138840) Data frame received for 5\nI0707 11:17:41.329893     498 log.go:172] (0xc000776780) (5) Data frame handling\nI0707 11:17:41.329945     498 log.go:172] (0xc000138840) Data frame received for 3\nI0707 11:17:41.329987     498 log.go:172] (0xc0007766e0) (3) Data frame handling\nI0707 11:17:41.330005     498 log.go:172] (0xc0007766e0) (3) Data frame sent\nI0707 11:17:41.330014     498 log.go:172] (0xc000138840) Data frame received for 3\nI0707 11:17:41.330021     498 log.go:172] (0xc0007766e0) (3) Data frame handling\nI0707 11:17:41.331580     498 log.go:172] (0xc000138840) Data frame received for 1\nI0707 11:17:41.331619     498 log.go:172] (0xc000776640) (1) Data frame handling\nI0707 11:17:41.331649     498 log.go:172] (0xc000776640) (1) Data frame sent\nI0707 11:17:41.331672     498 log.go:172] (0xc000138840) (0xc000776640) Stream removed, broadcasting: 1\nI0707 11:17:41.331712     498 log.go:172] (0xc000138840) Go away received\nI0707 11:17:41.332002     498 log.go:172] (0xc000138840) (0xc000776640) Stream removed, broadcasting: 1\nI0707 11:17:41.332034     498 log.go:172] (0xc000138840) (0xc0007766e0) Stream removed, broadcasting: 3\nI0707 11:17:41.332052     498 log.go:172] (0xc000138840) (0xc000776780) Stream removed, broadcasting: 5\n"
Jul  7 11:17:41.335: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  7 11:17:41.335: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  7 11:17:41.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-nhzls ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:17:41.521: INFO: stderr: "I0707 11:17:41.454141     520 log.go:172] (0xc0008042c0) (0xc00071c640) Create stream\nI0707 11:17:41.454213     520 log.go:172] (0xc0008042c0) (0xc00071c640) Stream added, broadcasting: 1\nI0707 11:17:41.456542     520 log.go:172] (0xc0008042c0) Reply frame received for 1\nI0707 11:17:41.456575     520 log.go:172] (0xc0008042c0) (0xc00060cfa0) Create stream\nI0707 11:17:41.456585     520 log.go:172] (0xc0008042c0) (0xc00060cfa0) Stream added, broadcasting: 3\nI0707 11:17:41.457488     520 log.go:172] (0xc0008042c0) Reply frame received for 3\nI0707 11:17:41.457512     520 log.go:172] (0xc0008042c0) (0xc00071c6e0) Create stream\nI0707 11:17:41.457519     520 log.go:172] (0xc0008042c0) (0xc00071c6e0) Stream added, broadcasting: 5\nI0707 11:17:41.458321     520 log.go:172] (0xc0008042c0) Reply frame received for 5\nI0707 11:17:41.516237     520 log.go:172] (0xc0008042c0) Data frame received for 5\nI0707 11:17:41.516290     520 log.go:172] (0xc00071c6e0) (5) Data frame handling\nI0707 11:17:41.516363     520 log.go:172] (0xc0008042c0) Data frame received for 3\nI0707 11:17:41.516411     520 log.go:172] (0xc00060cfa0) (3) Data frame handling\nI0707 11:17:41.516442     520 log.go:172] (0xc00060cfa0) (3) Data frame sent\nI0707 11:17:41.516476     520 log.go:172] (0xc0008042c0) Data frame received for 3\nI0707 11:17:41.516496     520 log.go:172] (0xc00060cfa0) (3) Data frame handling\nI0707 11:17:41.517678     520 log.go:172] (0xc0008042c0) Data frame received for 1\nI0707 11:17:41.517706     520 log.go:172] (0xc00071c640) (1) Data frame handling\nI0707 11:17:41.517734     520 log.go:172] (0xc00071c640) (1) Data frame sent\nI0707 11:17:41.517764     520 log.go:172] (0xc0008042c0) (0xc00071c640) Stream removed, broadcasting: 1\nI0707 11:17:41.517791     520 log.go:172] (0xc0008042c0) Go away received\nI0707 11:17:41.517963     520 log.go:172] (0xc0008042c0) (0xc00071c640) Stream removed, broadcasting: 1\nI0707 11:17:41.517985     520 log.go:172] (0xc0008042c0) (0xc00060cfa0) Stream removed, broadcasting: 3\nI0707 11:17:41.517997     520 log.go:172] (0xc0008042c0) (0xc00071c6e0) Stream removed, broadcasting: 5\n"
Jul  7 11:17:41.521: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  7 11:17:41.521: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  7 11:17:41.521: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul  7 11:18:21.535: INFO: Deleting all statefulset in ns e2e-tests-statefulset-nhzls
Jul  7 11:18:21.538: INFO: Scaling statefulset ss to 0
Jul  7 11:18:21.547: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 11:18:21.550: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:18:21.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-nhzls" for this suite.
Jul  7 11:18:29.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:18:29.685: INFO: namespace: e2e-tests-statefulset-nhzls, resource: bindings, ignored listing per whitelist
Jul  7 11:18:29.696: INFO: namespace e2e-tests-statefulset-nhzls deletion completed in 8.096223836s

• [SLOW TEST:120.767 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
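
For reference, a minimal sketch of the behaviour this test exercises: an OrderedReady StatefulSet scales one ordinal at a time and tears down in reverse order (ss-2 before ss-1 before ss-0), halting if a pod is unready. The namespace "demo", the headless Service, and the probe path below are illustrative assumptions, not the suite's generated objects; the suite itself toggles readiness by moving index.html in and out of the nginx web root, as the kubectl exec lines above show.

# Sketch only: assumes a scratch namespace "demo" and kubectl pointed at a test cluster.
kubectl create namespace demo
cat <<'EOF' | kubectl -n demo apply -f -
apiVersion: v1
kind: Service
metadata: {name: ss}
spec:
  clusterIP: None                  # headless Service governing the StatefulSet
  selector: {app: ss}
  ports: [{port: 80}]
---
apiVersion: apps/v1
kind: StatefulSet
metadata: {name: ss}
spec:
  serviceName: ss
  replicas: 3
  selector: {matchLabels: {app: ss}}
  template:
    metadata: {labels: {app: ss}}
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        readinessProbe:            # an unready pod halts further ordered scaling
          httpGet: {path: /index.html, port: 80}
EOF
# Scale to 0 and watch pods terminate in reverse ordinal order: ss-2, ss-1, ss-0.
kubectl -n demo scale statefulset ss --replicas=0
kubectl -n demo get pods -l app=ss -w
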
------------------------------
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:18:29.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-tr775
Jul  7 11:18:34.108: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-tr775
STEP: checking the pod's current state and verifying that restartCount is present
Jul  7 11:18:34.111: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:22:35.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-tr775" for this suite.
Jul  7 11:22:41.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:22:41.387: INFO: namespace: e2e-tests-container-probe-tr775, resource: bindings, ignored listing per whitelist
Jul  7 11:22:41.486: INFO: namespace e2e-tests-container-probe-tr775 deletion completed in 6.152369158s

• [SLOW TEST:251.790 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
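
A minimal sketch of the pattern being verified, with nginx standing in for the suite's test web server and "/" standing in for its /healthz endpoint (both assumptions): as long as the HTTP liveness probe keeps succeeding, restartCount stays at 0.

cat <<'EOF' | kubectl -n demo apply -f -
apiVersion: v1
kind: Pod
metadata: {name: liveness-http}
spec:
  containers:
  - name: web
    image: docker.io/library/nginx:1.14-alpine
    livenessProbe:                 # kubelet probes this URL; repeated failures would restart the container
      httpGet: {path: /, port: 80}
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# The probe keeps succeeding, so this should keep printing 0.
kubectl -n demo get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'
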
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:22:41.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  7 11:22:41.623: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2bf5e9b4-c044-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-hp2br" to be "success or failure"
Jul  7 11:22:41.629: INFO: Pod "downwardapi-volume-2bf5e9b4-c044-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.746632ms
Jul  7 11:22:43.633: INFO: Pod "downwardapi-volume-2bf5e9b4-c044-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009751467s
Jul  7 11:22:45.637: INFO: Pod "downwardapi-volume-2bf5e9b4-c044-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013487446s
STEP: Saw pod success
Jul  7 11:22:45.637: INFO: Pod "downwardapi-volume-2bf5e9b4-c044-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:22:45.639: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-2bf5e9b4-c044-11ea-9ad7-0242ac11001b container client-container: 
STEP: delete the pod
Jul  7 11:22:45.705: INFO: Waiting for pod downwardapi-volume-2bf5e9b4-c044-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:22:45.713: INFO: Pod downwardapi-volume-2bf5e9b4-c044-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:22:45.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hp2br" for this suite.
Jul  7 11:22:51.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:22:51.785: INFO: namespace: e2e-tests-downward-api-hp2br, resource: bindings, ignored listing per whitelist
Jul  7 11:22:51.815: INFO: namespace e2e-tests-downward-api-hp2br deletion completed in 6.098827466s

• [SLOW TEST:10.329 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
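
A minimal sketch of the downwardAPI volume feature this test covers, with illustrative names: the container's own memory limit is projected into a file via resourceFieldRef.

cat <<'EOF' | kubectl -n demo apply -f -
apiVersion: v1
kind: Pod
metadata: {name: dapi-memory-limit}
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits: {memory: 64Mi, cpu: 100m}
    volumeMounts:
    - {name: podinfo, mountPath: /etc/podinfo}
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:          # exposes this container's limits.memory (bytes, with the default divisor)
          containerName: client-container
          resource: limits.memory
EOF
kubectl -n demo logs dapi-memory-limit   # prints 67108864, i.e. 64Mi in bytes
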
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:22:51.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jul  7 11:22:51.984: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-6d647,SelfLink:/api/v1/namespaces/e2e-tests-watch-6d647/configmaps/e2e-watch-test-resource-version,UID:3218a7d8-c044-11ea-a300-0242ac110004,ResourceVersion:595575,Generation:0,CreationTimestamp:2020-07-07 11:22:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  7 11:22:51.984: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-6d647,SelfLink:/api/v1/namespaces/e2e-tests-watch-6d647/configmaps/e2e-watch-test-resource-version,UID:3218a7d8-c044-11ea-a300-0242ac110004,ResourceVersion:595576,Generation:0,CreationTimestamp:2020-07-07 11:22:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:22:51.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-6d647" for this suite.
Jul  7 11:22:57.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:22:58.031: INFO: namespace: e2e-tests-watch-6d647, resource: bindings, ignored listing per whitelist
Jul  7 11:22:58.072: INFO: namespace e2e-tests-watch-6d647 deletion completed in 6.08235561s

• [SLOW TEST:6.256 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
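
A sketch of the behaviour asserted here, watching from a known resourceVersion: a watch started at the version returned by the first update replays only the later MODIFIED and DELETED events. kubectl proxy and curl are used purely for illustration, and all names are assumptions.

kubectl -n demo create configmap e2e-watch-test --from-literal=mutation=0
kubectl -n demo patch configmap e2e-watch-test -p '{"data":{"mutation":"1"}}'   # first update
RV=$(kubectl -n demo get configmap e2e-watch-test -o jsonpath='{.metadata.resourceVersion}')
kubectl -n demo patch configmap e2e-watch-test -p '{"data":{"mutation":"2"}}'   # second update
kubectl -n demo delete configmap e2e-watch-test
# Watch from RV: only the events after the first update (the second MODIFIED and the DELETED) are delivered.
kubectl proxy --port=8001 &
curl -N "http://127.0.0.1:8001/api/v1/namespaces/demo/configmaps?watch=true&resourceVersion=${RV}"
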
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:22:58.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:22:58.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-qdpfl" for this suite.
Jul  7 11:23:04.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:23:04.424: INFO: namespace: e2e-tests-kubelet-test-qdpfl, resource: bindings, ignored listing per whitelist
Jul  7 11:23:04.426: INFO: namespace e2e-tests-kubelet-test-qdpfl deletion completed in 6.122855243s

• [SLOW TEST:6.353 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
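
A sketch with illustrative names of what this test checks: a pod whose only container always exits with a failure can still be deleted normally.

cat <<'EOF' | kubectl -n demo apply -f -
apiVersion: v1
kind: Pod
metadata: {name: bin-false}
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: docker.io/library/busybox
    command: ["/bin/false"]        # always exits non-zero
EOF
kubectl -n demo delete pod bin-false   # deletion succeeds even though the container never ran successfully
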
------------------------------
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:23:04.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul  7 11:23:04.544: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  7 11:23:04.556: INFO: Waiting for terminating namespaces to be deleted...
Jul  7 11:23:04.559: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Jul  7 11:23:04.564: INFO: kube-proxy-cqbm8 from kube-system started at 2020-07-04 07:47:44 +0000 UTC (1 container statuses recorded)
Jul  7 11:23:04.564: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  7 11:23:04.564: INFO: kindnet-mcn92 from kube-system started at 2020-07-04 07:47:46 +0000 UTC (1 container statuses recorded)
Jul  7 11:23:04.564: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 11:23:04.564: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Jul  7 11:23:04.570: INFO: local-path-provisioner-674595c7-cvgpb from local-path-storage started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul  7 11:23:04.570: INFO: 	Container local-path-provisioner ready: true, restart count 2
Jul  7 11:23:04.570: INFO: coredns-54ff9cd656-mgg2q from kube-system started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul  7 11:23:04.570: INFO: 	Container coredns ready: true, restart count 0
Jul  7 11:23:04.570: INFO: coredns-54ff9cd656-l7q92 from kube-system started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul  7 11:23:04.570: INFO: 	Container coredns ready: true, restart count 0
Jul  7 11:23:04.570: INFO: kube-proxy-52vr2 from kube-system started at 2020-07-04 07:47:44 +0000 UTC (1 container statuses recorded)
Jul  7 11:23:04.570: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  7 11:23:04.570: INFO: kindnet-rll2b from kube-system started at 2020-07-04 07:47:46 +0000 UTC (1 container statuses recorded)
Jul  7 11:23:04.570: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
Jul  7 11:23:04.730: INFO: Pod coredns-54ff9cd656-l7q92 requesting resource cpu=100m on Node hunter-worker2
Jul  7 11:23:04.731: INFO: Pod coredns-54ff9cd656-mgg2q requesting resource cpu=100m on Node hunter-worker2
Jul  7 11:23:04.731: INFO: Pod kindnet-mcn92 requesting resource cpu=100m on Node hunter-worker
Jul  7 11:23:04.731: INFO: Pod kindnet-rll2b requesting resource cpu=100m on Node hunter-worker2
Jul  7 11:23:04.731: INFO: Pod kube-proxy-52vr2 requesting resource cpu=0m on Node hunter-worker2
Jul  7 11:23:04.731: INFO: Pod kube-proxy-cqbm8 requesting resource cpu=0m on Node hunter-worker
Jul  7 11:23:04.731: INFO: Pod local-path-provisioner-674595c7-cvgpb requesting resource cpu=0m on Node hunter-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-39be69ef-c044-11ea-9ad7-0242ac11001b.161f7496f0d24459], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-cxzv4/filler-pod-39be69ef-c044-11ea-9ad7-0242ac11001b to hunter-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-39be69ef-c044-11ea-9ad7-0242ac11001b.161f74974497d97c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-39be69ef-c044-11ea-9ad7-0242ac11001b.161f7497aaf2501c], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-39be69ef-c044-11ea-9ad7-0242ac11001b.161f7497bff76696], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-39bf3940-c044-11ea-9ad7-0242ac11001b.161f7496f25bd58b], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-cxzv4/filler-pod-39bf3940-c044-11ea-9ad7-0242ac11001b to hunter-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-39bf3940-c044-11ea-9ad7-0242ac11001b.161f749757108888], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-39bf3940-c044-11ea-9ad7-0242ac11001b.161f7497c3e61510], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-39bf3940-c044-11ea-9ad7-0242ac11001b.161f7497d405468c], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.161f749859238081], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:23:11.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-cxzv4" for this suite.
Jul  7 11:23:17.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:23:18.014: INFO: namespace: e2e-tests-sched-pred-cxzv4, resource: bindings, ignored listing per whitelist
Jul  7 11:23:18.079: INFO: namespace e2e-tests-sched-pred-cxzv4 deletion completed in 6.124959264s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:13.653 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
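
A sketch of the predicate being validated: once the schedulable CPU on every worker is consumed (the filler pods above), a pod requesting more CPU than any node has left stays Pending with a FailedScheduling event. The request below is deliberately unsatisfiable, and the names are illustrative.

cat <<'EOF' | kubectl -n demo apply -f -
apiVersion: v1
kind: Pod
metadata: {name: additional-pod}
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests: {cpu: "1000"}      # far more CPU than any node can offer
EOF
kubectl -n demo get pod additional-pod                            # stays Pending
kubectl -n demo describe pod additional-pod | grep -A3 Events     # shows FailedScheduling / Insufficient cpu
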
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:23:18.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jul  7 11:23:18.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wc5d7'
Jul  7 11:23:22.285: INFO: stderr: ""
Jul  7 11:23:22.285: INFO: stdout: "pod/pause created\n"
Jul  7 11:23:22.285: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jul  7 11:23:22.285: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-wc5d7" to be "running and ready"
Jul  7 11:23:22.291: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059149ms
Jul  7 11:23:24.295: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01021669s
Jul  7 11:23:26.300: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.014552043s
Jul  7 11:23:26.300: INFO: Pod "pause" satisfied condition "running and ready"
Jul  7 11:23:26.300: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jul  7 11:23:26.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-wc5d7'
Jul  7 11:23:26.422: INFO: stderr: ""
Jul  7 11:23:26.422: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jul  7 11:23:26.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-wc5d7'
Jul  7 11:23:26.521: INFO: stderr: ""
Jul  7 11:23:26.521: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jul  7 11:23:26.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-wc5d7'
Jul  7 11:23:26.616: INFO: stderr: ""
Jul  7 11:23:26.616: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jul  7 11:23:26.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-wc5d7'
Jul  7 11:23:26.715: INFO: stderr: ""
Jul  7 11:23:26.715: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jul  7 11:23:26.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wc5d7'
Jul  7 11:23:26.858: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 11:23:26.858: INFO: stdout: "pod \"pause\" force deleted\n"
Jul  7 11:23:26.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-wc5d7'
Jul  7 11:23:26.962: INFO: stderr: "No resources found.\n"
Jul  7 11:23:26.962: INFO: stdout: ""
Jul  7 11:23:26.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-wc5d7 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  7 11:23:27.051: INFO: stderr: ""
Jul  7 11:23:27.051: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:23:27.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wc5d7" for this suite.
Jul  7 11:23:33.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:23:33.312: INFO: namespace: e2e-tests-kubectl-wc5d7, resource: bindings, ignored listing per whitelist
Jul  7 11:23:33.345: INFO: namespace e2e-tests-kubectl-wc5d7 deletion completed in 6.290428826s

• [SLOW TEST:15.266 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
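
The commands this test drives can be reproduced directly; only the namespace and pod creation step are assumptions here, while the label syntax (key=value to add, trailing '-' to remove) is exactly what the log above shows.

kubectl -n demo run pause --image=k8s.gcr.io/pause:3.1 --restart=Never
kubectl -n demo label pods pause testing-label=testing-label-value
kubectl -n demo get pod pause -L testing-label    # TESTING-LABEL column shows testing-label-value
kubectl -n demo label pods pause testing-label-   # trailing '-' removes the label
kubectl -n demo get pod pause -L testing-label    # column is now empty
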
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:23:33.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-4adb15f0-c044-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume secrets
Jul  7 11:23:33.491: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4adda323-c044-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-7vw7q" to be "success or failure"
Jul  7 11:23:33.524: INFO: Pod "pod-projected-secrets-4adda323-c044-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.066054ms
Jul  7 11:23:35.528: INFO: Pod "pod-projected-secrets-4adda323-c044-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037528355s
Jul  7 11:23:37.537: INFO: Pod "pod-projected-secrets-4adda323-c044-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046181718s
STEP: Saw pod success
Jul  7 11:23:37.537: INFO: Pod "pod-projected-secrets-4adda323-c044-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:23:37.540: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-4adda323-c044-11ea-9ad7-0242ac11001b container projected-secret-volume-test: 
STEP: delete the pod
Jul  7 11:23:37.571: INFO: Waiting for pod pod-projected-secrets-4adda323-c044-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:23:37.582: INFO: Pod pod-projected-secrets-4adda323-c044-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:23:37.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-7vw7q" for this suite.
Jul  7 11:23:43.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:23:43.678: INFO: namespace: e2e-tests-projected-7vw7q, resource: bindings, ignored listing per whitelist
Jul  7 11:23:43.701: INFO: namespace e2e-tests-projected-7vw7q deletion completed in 6.115844805s

• [SLOW TEST:10.356 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
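
A minimal sketch of the projected-secret variant this test covers, with assumed names: the pod runs as a non-root user, the projected volume's defaultMode restricts the file mode, and fsGroup controls group ownership of the mounted files.

kubectl -n demo create secret generic demo-secret --from-literal=data-1=value-1
cat <<'EOF' | kubectl -n demo apply -f -
apiVersion: v1
kind: Pod
metadata: {name: pod-projected-secrets}
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                # non-root
    fsGroup: 2000                  # group ownership applied to the volume
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox
    command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - {name: secret-vol, mountPath: /etc/projected, readOnly: true}
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0440
      sources:
      - secret: {name: demo-secret}
EOF
kubectl -n demo logs pod-projected-secrets
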
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:23:43.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  7 11:23:43.844: INFO: Waiting up to 5m0s for pod "downwardapi-volume-510c0427-c044-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-47kgs" to be "success or failure"
Jul  7 11:23:43.848: INFO: Pod "downwardapi-volume-510c0427-c044-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.47128ms
Jul  7 11:23:45.852: INFO: Pod "downwardapi-volume-510c0427-c044-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007920664s
Jul  7 11:23:47.857: INFO: Pod "downwardapi-volume-510c0427-c044-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.013066282s
Jul  7 11:23:49.862: INFO: Pod "downwardapi-volume-510c0427-c044-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017449307s
STEP: Saw pod success
Jul  7 11:23:49.862: INFO: Pod "downwardapi-volume-510c0427-c044-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:23:49.865: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-510c0427-c044-11ea-9ad7-0242ac11001b container client-container: 
STEP: delete the pod
Jul  7 11:23:49.886: INFO: Waiting for pod downwardapi-volume-510c0427-c044-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:23:49.904: INFO: Pod downwardapi-volume-510c0427-c044-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:23:49.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-47kgs" for this suite.
Jul  7 11:23:55.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:23:56.001: INFO: namespace: e2e-tests-projected-47kgs, resource: bindings, ignored listing per whitelist
Jul  7 11:23:56.008: INFO: namespace e2e-tests-projected-47kgs deletion completed in 6.099848822s

• [SLOW TEST:12.306 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
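
The same downwardAPI mechanism as the earlier memory-limit test, but through a projected volume and exposing requests.memory; a sketch with assumed names.

cat <<'EOF' | kubectl -n demo apply -f -
apiVersion: v1
kind: Pod
metadata: {name: dapi-memory-request}
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests: {memory: 32Mi}
    volumeMounts:
    - {name: podinfo, mountPath: /etc/podinfo}
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
EOF
kubectl -n demo logs dapi-memory-request   # prints 33554432, i.e. 32Mi in bytes
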
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:23:56.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul  7 11:23:56.115: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:24:03.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-jnw5z" for this suite.
Jul  7 11:24:09.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:24:09.296: INFO: namespace: e2e-tests-init-container-jnw5z, resource: bindings, ignored listing per whitelist
Jul  7 11:24:09.338: INFO: namespace e2e-tests-init-container-jnw5z deletion completed in 6.095101412s

• [SLOW TEST:13.330 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
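
A sketch of what this test creates, with illustrative names: on a restartPolicy: Never pod, each init container must run to completion, in order, before the app container starts.

cat <<'EOF' | kubectl -n demo apply -f -
apiVersion: v1
kind: Pod
metadata: {name: pod-init-demo}
spec:
  restartPolicy: Never
  initContainers:                  # run one at a time, in order, to completion
  - name: init-1
    image: docker.io/library/busybox
    command: ["/bin/true"]
  - name: init-2
    image: docker.io/library/busybox
    command: ["/bin/true"]
  containers:
  - name: run-1
    image: docker.io/library/busybox
    command: ["/bin/true"]
EOF
kubectl -n demo get pod pod-init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'
# expected output once both init containers have run: Completed Completed
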
------------------------------
SSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:24:09.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jul  7 11:24:09.458: INFO: Waiting up to 5m0s for pod "client-containers-604f6ea7-c044-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-containers-fdm9p" to be "success or failure"
Jul  7 11:24:09.466: INFO: Pod "client-containers-604f6ea7-c044-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.102918ms
Jul  7 11:24:11.469: INFO: Pod "client-containers-604f6ea7-c044-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01072401s
Jul  7 11:24:13.483: INFO: Pod "client-containers-604f6ea7-c044-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024524883s
STEP: Saw pod success
Jul  7 11:24:13.483: INFO: Pod "client-containers-604f6ea7-c044-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:24:13.486: INFO: Trying to get logs from node hunter-worker2 pod client-containers-604f6ea7-c044-11ea-9ad7-0242ac11001b container test-container: 
STEP: delete the pod
Jul  7 11:24:13.508: INFO: Waiting for pod client-containers-604f6ea7-c044-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:24:13.513: INFO: Pod client-containers-604f6ea7-c044-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:24:13.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-fdm9p" for this suite.
Jul  7 11:24:19.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:24:19.549: INFO: namespace: e2e-tests-containers-fdm9p, resource: bindings, ignored listing per whitelist
Jul  7 11:24:19.612: INFO: namespace e2e-tests-containers-fdm9p deletion completed in 6.095981907s

• [SLOW TEST:10.274 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
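
A sketch of the behaviour under test, with nginx standing in for the suite's test image (an assumption): when command and args are omitted from the pod spec, the container runs the image's own ENTRYPOINT and CMD.

cat <<'EOF' | kubectl -n demo apply -f -
apiVersion: v1
kind: Pod
metadata: {name: client-containers-defaults}
spec:
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine   # no command/args: the image defaults are used
EOF
kubectl -n demo get pod client-containers-defaults -o jsonpath='{.spec.containers[0].command}'   # empty, so the image ENTRYPOINT applies
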
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:24:19.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  7 11:24:19.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66725d93-c044-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-qq78h" to be "success or failure"
Jul  7 11:24:19.748: INFO: Pod "downwardapi-volume-66725d93-c044-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.236449ms
Jul  7 11:24:21.752: INFO: Pod "downwardapi-volume-66725d93-c044-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016343674s
Jul  7 11:24:23.795: INFO: Pod "downwardapi-volume-66725d93-c044-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059627544s
STEP: Saw pod success
Jul  7 11:24:23.795: INFO: Pod "downwardapi-volume-66725d93-c044-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:24:23.799: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-66725d93-c044-11ea-9ad7-0242ac11001b container client-container: 
STEP: delete the pod
Jul  7 11:24:23.854: INFO: Waiting for pod downwardapi-volume-66725d93-c044-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:24:23.877: INFO: Pod downwardapi-volume-66725d93-c044-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:24:23.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-qq78h" for this suite.
Jul  7 11:24:29.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:24:29.926: INFO: namespace: e2e-tests-downward-api-qq78h, resource: bindings, ignored listing per whitelist
Jul  7 11:24:29.967: INFO: namespace e2e-tests-downward-api-qq78h deletion completed in 6.085975715s

• [SLOW TEST:10.354 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
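
A sketch with assumed names showing the per-item mode field this test sets on a downwardAPI volume file.

cat <<'EOF' | kubectl -n demo apply -f -
apiVersion: v1
kind: Pod
metadata: {name: dapi-item-mode}
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox
    command: ["sh", "-c", "stat -L -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - {name: podinfo, mountPath: /etc/podinfo}
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef: {fieldPath: metadata.name}
        mode: 0400                 # per-item file mode, overriding the volume default
EOF
kubectl -n demo logs dapi-item-mode   # expected: 400
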
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:24:29.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jul  7 11:24:30.052: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7mmcl,SelfLink:/api/v1/namespaces/e2e-tests-watch-7mmcl/configmaps/e2e-watch-test-configmap-a,UID:6c991e61-c044-11ea-a300-0242ac110004,ResourceVersion:596001,Generation:0,CreationTimestamp:2020-07-07 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  7 11:24:30.052: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7mmcl,SelfLink:/api/v1/namespaces/e2e-tests-watch-7mmcl/configmaps/e2e-watch-test-configmap-a,UID:6c991e61-c044-11ea-a300-0242ac110004,ResourceVersion:596001,Generation:0,CreationTimestamp:2020-07-07 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jul  7 11:24:40.061: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7mmcl,SelfLink:/api/v1/namespaces/e2e-tests-watch-7mmcl/configmaps/e2e-watch-test-configmap-a,UID:6c991e61-c044-11ea-a300-0242ac110004,ResourceVersion:596021,Generation:0,CreationTimestamp:2020-07-07 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul  7 11:24:40.061: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7mmcl,SelfLink:/api/v1/namespaces/e2e-tests-watch-7mmcl/configmaps/e2e-watch-test-configmap-a,UID:6c991e61-c044-11ea-a300-0242ac110004,ResourceVersion:596021,Generation:0,CreationTimestamp:2020-07-07 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jul  7 11:24:50.068: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7mmcl,SelfLink:/api/v1/namespaces/e2e-tests-watch-7mmcl/configmaps/e2e-watch-test-configmap-a,UID:6c991e61-c044-11ea-a300-0242ac110004,ResourceVersion:596041,Generation:0,CreationTimestamp:2020-07-07 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  7 11:24:50.069: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7mmcl,SelfLink:/api/v1/namespaces/e2e-tests-watch-7mmcl/configmaps/e2e-watch-test-configmap-a,UID:6c991e61-c044-11ea-a300-0242ac110004,ResourceVersion:596041,Generation:0,CreationTimestamp:2020-07-07 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jul  7 11:25:00.076: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7mmcl,SelfLink:/api/v1/namespaces/e2e-tests-watch-7mmcl/configmaps/e2e-watch-test-configmap-a,UID:6c991e61-c044-11ea-a300-0242ac110004,ResourceVersion:596061,Generation:0,CreationTimestamp:2020-07-07 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  7 11:25:00.076: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-7mmcl,SelfLink:/api/v1/namespaces/e2e-tests-watch-7mmcl/configmaps/e2e-watch-test-configmap-a,UID:6c991e61-c044-11ea-a300-0242ac110004,ResourceVersion:596061,Generation:0,CreationTimestamp:2020-07-07 11:24:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jul  7 11:25:10.083: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7mmcl,SelfLink:/api/v1/namespaces/e2e-tests-watch-7mmcl/configmaps/e2e-watch-test-configmap-b,UID:8474dd70-c044-11ea-a300-0242ac110004,ResourceVersion:596081,Generation:0,CreationTimestamp:2020-07-07 11:25:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  7 11:25:10.083: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7mmcl,SelfLink:/api/v1/namespaces/e2e-tests-watch-7mmcl/configmaps/e2e-watch-test-configmap-b,UID:8474dd70-c044-11ea-a300-0242ac110004,ResourceVersion:596081,Generation:0,CreationTimestamp:2020-07-07 11:25:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jul  7 11:25:20.090: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7mmcl,SelfLink:/api/v1/namespaces/e2e-tests-watch-7mmcl/configmaps/e2e-watch-test-configmap-b,UID:8474dd70-c044-11ea-a300-0242ac110004,ResourceVersion:596101,Generation:0,CreationTimestamp:2020-07-07 11:25:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  7 11:25:20.090: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-7mmcl,SelfLink:/api/v1/namespaces/e2e-tests-watch-7mmcl/configmaps/e2e-watch-test-configmap-b,UID:8474dd70-c044-11ea-a300-0242ac110004,ResourceVersion:596101,Generation:0,CreationTimestamp:2020-07-07 11:25:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:25:30.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-7mmcl" for this suite.
Jul  7 11:25:36.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:25:36.127: INFO: namespace: e2e-tests-watch-7mmcl, resource: bindings, ignored listing per whitelist
Jul  7 11:25:36.191: INFO: namespace e2e-tests-watch-7mmcl deletion completed in 6.095143278s

• [SLOW TEST:66.224 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
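A rough hand-run equivalent of the configmap-watch sequence above, using plain kubectl; the namespace and object names below are illustrative stand-ins, not the suite's generated ones.

kubectl create namespace watch-demo
# Terminal 1: watch only configmaps carrying label A
kubectl -n watch-demo get configmaps -l watch-this-configmap=multiple-watchers-A -w -o name
# Terminal 2: drive roughly the same sequence the spec performs
kubectl -n watch-demo create configmap e2e-watch-test-configmap-a
kubectl -n watch-demo label configmap e2e-watch-test-configmap-a watch-this-configmap=multiple-watchers-A
kubectl -n watch-demo patch configmap e2e-watch-test-configmap-a --type merge -p '{"data":{"mutation":"1"}}'
kubectl -n watch-demo patch configmap e2e-watch-test-configmap-a --type merge -p '{"data":{"mutation":"2"}}'
kubectl -n watch-demo delete configmap e2e-watch-test-configmap-a
# A second watcher filtered on watch-this-configmap=multiple-watchers-B should stay silent
# until a configmap carrying label B is created and deleted, matching the events logged above.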
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:25:36.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 11:25:36.307: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/: 
alternatives.log
containers/
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 11:25:42.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jul  7 11:25:42.732: INFO: stderr: ""
Jul  7 11:25:42.732: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-07T09:19:16Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jul  7 11:25:42.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tvwdr'
Jul  7 11:25:43.030: INFO: stderr: ""
Jul  7 11:25:43.030: INFO: stdout: "replicationcontroller/redis-master created\n"
Jul  7 11:25:43.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tvwdr'
Jul  7 11:25:43.360: INFO: stderr: ""
Jul  7 11:25:43.360: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul  7 11:25:44.365: INFO: Selector matched 1 pods for map[app:redis]
Jul  7 11:25:44.365: INFO: Found 0 / 1
Jul  7 11:25:45.365: INFO: Selector matched 1 pods for map[app:redis]
Jul  7 11:25:45.365: INFO: Found 0 / 1
Jul  7 11:25:46.365: INFO: Selector matched 1 pods for map[app:redis]
Jul  7 11:25:46.365: INFO: Found 1 / 1
Jul  7 11:25:46.365: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul  7 11:25:46.369: INFO: Selector matched 1 pods for map[app:redis]
Jul  7 11:25:46.369: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  7 11:25:46.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-5q8tk --namespace=e2e-tests-kubectl-tvwdr'
Jul  7 11:25:46.485: INFO: stderr: ""
Jul  7 11:25:46.485: INFO: stdout: "Name:               redis-master-5q8tk\nNamespace:          e2e-tests-kubectl-tvwdr\nPriority:           0\nPriorityClassName:  \nNode:               hunter-worker2/172.17.0.2\nStart Time:         Tue, 07 Jul 2020 11:25:43 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        \nStatus:             Running\nIP:                 10.244.1.22\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://03be7870fc21ece20d3c139f7c035a537ae7371b9eb9e2691caf506be4af7cb1\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 07 Jul 2020 11:25:45 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-v7sdp (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-v7sdp:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-v7sdp\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                     Message\n  ----    ------     ----  ----                     -------\n  Normal  Scheduled  3s    default-scheduler        Successfully assigned e2e-tests-kubectl-tvwdr/redis-master-5q8tk to hunter-worker2\n  Normal  Pulled     2s    kubelet, hunter-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, hunter-worker2  Created container\n  Normal  Started    1s    kubelet, hunter-worker2  Started container\n"
Jul  7 11:25:46.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-tvwdr'
Jul  7 11:25:46.606: INFO: stderr: ""
Jul  7 11:25:46.606: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-tvwdr\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: redis-master-5q8tk\n"
Jul  7 11:25:46.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-tvwdr'
Jul  7 11:25:46.732: INFO: stderr: ""
Jul  7 11:25:46.732: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-tvwdr\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.111.45.91\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.1.22:6379\nSession Affinity:  None\nEvents:            \n"
Jul  7 11:25:46.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane'
Jul  7 11:25:46.869: INFO: stderr: ""
Jul  7 11:25:46.869: INFO: stdout: "Name:               hunter-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=hunter-control-plane\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jul 2020 07:47:23 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Tue, 07 Jul 2020 11:25:39 +0000   Sat, 04 Jul 2020 07:47:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Tue, 07 Jul 2020 11:25:39 +0000   Sat, 04 Jul 2020 07:47:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Tue, 07 Jul 2020 11:25:39 +0000   Sat, 04 Jul 2020 07:47:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Tue, 07 Jul 2020 11:25:39 +0000   Sat, 04 Jul 2020 07:48:14 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.4\n  Hostname:    hunter-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759892Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759892Ki\n pods:               110\nSystem Info:\n Machine ID:                 268105a9121e48d584b7113fd8a9e3a1\n System UUID:                0e585f84-1906-441c-90cd-c4ab5eda753d\n Boot ID:                    ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version:             4.15.0-88-generic\n OS Image:                   Ubuntu 19.10\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.3.3-14-g449e9269\n Kubelet Version:            v1.13.12\n Kube-Proxy Version:         v1.13.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (6 in total)\n  Namespace                  Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                            ------------  ----------  ---------------  -------------  ---\n  kube-system                etcd-hunter-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d3h\n  kube-system                kindnet-9q4t6                                   100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      3d3h\n  kube-system                kube-apiserver-hunter-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         3d3h\n  kube-system                kube-controller-manager-hunter-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         3d3h\n  kube-system                kube-proxy-dmvsw                
                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3d3h\n  kube-system                kube-scheduler-hunter-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         3d3h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests   Limits\n  --------           --------   ------\n  cpu                650m (4%)  100m (0%)\n  memory             50Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)     0 (0%)\nEvents:              \n"
Jul  7 11:25:46.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-tvwdr'
Jul  7 11:25:46.974: INFO: stderr: ""
Jul  7 11:25:46.974: INFO: stdout: "Name:         e2e-tests-kubectl-tvwdr\nLabels:       e2e-framework=kubectl\n              e2e-run=32654ca2-c03f-11ea-9ad7-0242ac11001b\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:25:46.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tvwdr" for this suite.
Jul  7 11:26:09.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:26:09.087: INFO: namespace: e2e-tests-kubectl-tvwdr, resource: bindings, ignored listing per whitelist
Jul  7 11:26:09.091: INFO: namespace e2e-tests-kubectl-tvwdr deletion completed in 22.11410213s

• [SLOW TEST:26.494 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
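The describe checks above can be reproduced by hand with a small replication controller; the manifest below is an illustrative stand-in for the redis-master fixture the suite pipes into 'kubectl create -f -', and the scratch namespace name is assumed.

kubectl create namespace describe-demo
cat <<'EOF' | kubectl -n describe-demo create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379
EOF
# describe should surface the labels, image, controller link, and the scheduling/pull/start events:
kubectl -n describe-demo describe rc redis-master
kubectl -n describe-demo describe pod -l app=redis,role=master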
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:26:09.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:26:13.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-v97z4" for this suite.
Jul  7 11:26:51.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:26:51.323: INFO: namespace: e2e-tests-kubelet-test-v97z4, resource: bindings, ignored listing per whitelist
Jul  7 11:26:51.349: INFO: namespace e2e-tests-kubelet-test-v97z4 deletion completed in 38.086691197s

• [SLOW TEST:42.258 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
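A minimal hand-run version of the busybox-to-logs check above; the pod name and message are illustrative.

kubectl run busybox-logs-demo --image=busybox --restart=Never -- sh -c 'echo Hello from the busybox pod'
# Give the one-shot pod a moment to run to completion, then read its stdout back from the logs:
sleep 10
kubectl logs busybox-logs-demo
kubectl delete pod busybox-logs-demo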
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:26:51.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 11:26:51.491: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jul  7 11:26:51.511: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:26:51.513: INFO: Number of nodes with available pods: 0
Jul  7 11:26:51.513: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:26:52.517: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:26:52.520: INFO: Number of nodes with available pods: 0
Jul  7 11:26:52.520: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:26:53.518: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:26:53.520: INFO: Number of nodes with available pods: 0
Jul  7 11:26:53.520: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:26:54.517: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:26:54.521: INFO: Number of nodes with available pods: 0
Jul  7 11:26:54.521: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:26:55.518: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:26:55.522: INFO: Number of nodes with available pods: 1
Jul  7 11:26:55.523: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:26:56.516: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:26:56.518: INFO: Number of nodes with available pods: 2
Jul  7 11:26:56.518: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jul  7 11:26:56.546: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:26:56.546: INFO: Wrong image for pod: daemon-set-9njdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:26:56.569: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:26:57.574: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:26:57.575: INFO: Wrong image for pod: daemon-set-9njdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:26:57.578: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:26:58.574: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:26:58.574: INFO: Wrong image for pod: daemon-set-9njdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:26:58.578: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:26:59.574: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:26:59.574: INFO: Wrong image for pod: daemon-set-9njdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:26:59.574: INFO: Pod daemon-set-9njdx is not available
Jul  7 11:26:59.578: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:00.574: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:00.574: INFO: Wrong image for pod: daemon-set-9njdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:00.574: INFO: Pod daemon-set-9njdx is not available
Jul  7 11:27:00.578: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:01.573: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:01.573: INFO: Wrong image for pod: daemon-set-9njdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:01.573: INFO: Pod daemon-set-9njdx is not available
Jul  7 11:27:01.576: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:02.574: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:02.574: INFO: Wrong image for pod: daemon-set-9njdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:02.574: INFO: Pod daemon-set-9njdx is not available
Jul  7 11:27:02.579: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:03.574: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:03.574: INFO: Wrong image for pod: daemon-set-9njdx. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:03.574: INFO: Pod daemon-set-9njdx is not available
Jul  7 11:27:03.579: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:04.573: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:04.573: INFO: Pod daemon-set-qsjgm is not available
Jul  7 11:27:04.576: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:06.187: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:06.187: INFO: Pod daemon-set-qsjgm is not available
Jul  7 11:27:06.219: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:06.798: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:06.798: INFO: Pod daemon-set-qsjgm is not available
Jul  7 11:27:06.802: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:07.574: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:07.574: INFO: Pod daemon-set-qsjgm is not available
Jul  7 11:27:07.578: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:08.573: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:08.573: INFO: Pod daemon-set-qsjgm is not available
Jul  7 11:27:08.616: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:09.573: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:09.577: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:10.574: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:10.578: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:11.574: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:11.574: INFO: Pod daemon-set-27xzv is not available
Jul  7 11:27:11.578: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:13.379: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:13.380: INFO: Pod daemon-set-27xzv is not available
Jul  7 11:27:13.429: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:13.573: INFO: Wrong image for pod: daemon-set-27xzv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  7 11:27:13.573: INFO: Pod daemon-set-27xzv is not available
Jul  7 11:27:13.578: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:14.678: INFO: Pod daemon-set-jgfpc is not available
Jul  7 11:27:14.683: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jul  7 11:27:14.687: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:14.690: INFO: Number of nodes with available pods: 1
Jul  7 11:27:14.690: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:27:15.695: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:15.698: INFO: Number of nodes with available pods: 1
Jul  7 11:27:15.698: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:27:16.696: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:16.742: INFO: Number of nodes with available pods: 1
Jul  7 11:27:16.742: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:27:17.762: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:17.766: INFO: Number of nodes with available pods: 1
Jul  7 11:27:17.766: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:27:18.697: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:27:18.700: INFO: Number of nodes with available pods: 2
Jul  7 11:27:18.700: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-xbrdv, will wait for the garbage collector to delete the pods
Jul  7 11:27:18.774: INFO: Deleting DaemonSet.extensions daemon-set took: 6.323089ms
Jul  7 11:27:18.975: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.444766ms
Jul  7 11:27:23.878: INFO: Number of nodes with available pods: 0
Jul  7 11:27:23.878: INFO: Number of running nodes: 0, number of available pods: 0
Jul  7 11:27:23.909: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-xbrdv/daemonsets","resourceVersion":"596494"},"items":null}

Jul  7 11:27:23.912: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-xbrdv/pods","resourceVersion":"596494"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:27:23.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-xbrdv" for this suite.
Jul  7 11:27:29.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:27:29.945: INFO: namespace: e2e-tests-daemonsets-xbrdv, resource: bindings, ignored listing per whitelist
Jul  7 11:27:30.010: INFO: namespace e2e-tests-daemonsets-xbrdv deletion completed in 6.087751797s

• [SLOW TEST:38.661 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
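The RollingUpdate sequence above boils down to a DaemonSet with updateStrategy RollingUpdate plus an image bump; a hand-run sketch using the same images the log shows (the label key and object name are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Bump the image, as the spec does, and wait for a new pod on every schedulable node:
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl rollout status daemonset/daemon-set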
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:27:30.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jul  7 11:27:30.149: INFO: Waiting up to 5m0s for pod "client-containers-d7efc716-c044-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-containers-lj6sn" to be "success or failure"
Jul  7 11:27:30.152: INFO: Pod "client-containers-d7efc716-c044-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.146592ms
Jul  7 11:27:32.779: INFO: Pod "client-containers-d7efc716-c044-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.630340134s
Jul  7 11:27:34.783: INFO: Pod "client-containers-d7efc716-c044-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.634177526s
Jul  7 11:27:36.788: INFO: Pod "client-containers-d7efc716-c044-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.638983765s
STEP: Saw pod success
Jul  7 11:27:36.788: INFO: Pod "client-containers-d7efc716-c044-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:27:36.791: INFO: Trying to get logs from node hunter-worker2 pod client-containers-d7efc716-c044-11ea-9ad7-0242ac11001b container test-container: 
STEP: delete the pod
Jul  7 11:27:36.818: INFO: Waiting for pod client-containers-d7efc716-c044-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:27:36.829: INFO: Pod client-containers-d7efc716-c044-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:27:36.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-lj6sn" for this suite.
Jul  7 11:27:42.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:27:42.970: INFO: namespace: e2e-tests-containers-lj6sn, resource: bindings, ignored listing per whitelist
Jul  7 11:27:43.037: INFO: namespace e2e-tests-containers-lj6sn deletion completed in 6.204995676s

• [SLOW TEST:13.026 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
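The override-arguments check above creates a pod whose args field replaces the image's default command; a minimal sketch with an assumed busybox image and message (the suite's test image and expected output differ):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # args maps to the container CMD, so this replaces the image's default arguments
    args: ["echo", "overridden arguments"]
EOF
# Once the pod has completed, its log should contain the overridden text:
kubectl logs args-override-demo
kubectl delete pod args-override-demo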
SSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:27:43.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jul  7 11:27:47.223: INFO: Pod pod-hostip-dfb3c60b-c044-11ea-9ad7-0242ac11001b has hostIP: 172.17.0.3
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:27:47.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-gwkg5" for this suite.
Jul  7 11:28:09.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:28:09.365: INFO: namespace: e2e-tests-pods-gwkg5, resource: bindings, ignored listing per whitelist
Jul  7 11:28:09.436: INFO: namespace e2e-tests-pods-gwkg5 deletion completed in 22.209937808s

• [SLOW TEST:26.399 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
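The host-IP check above only asserts that status.hostIP gets filled in once the pod is bound to a node; by hand (names are illustrative):

kubectl run hostip-demo --image=busybox --restart=Never -- sleep 3600
# After scheduling, the node's address appears in the pod status:
kubectl get pod hostip-demo -o jsonpath='{.status.hostIP}{"\n"}'
kubectl delete pod hostip-demo --grace-period=0 --force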
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:28:09.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 11:28:09.668: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jul  7 11:28:09.674: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-h5w5x/daemonsets","resourceVersion":"596665"},"items":null}

Jul  7 11:28:09.676: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-h5w5x/pods","resourceVersion":"596665"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:28:09.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-h5w5x" for this suite.
Jul  7 11:28:15.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:28:15.758: INFO: namespace: e2e-tests-daemonsets-h5w5x, resource: bindings, ignored listing per whitelist
Jul  7 11:28:15.790: INFO: namespace e2e-tests-daemonsets-h5w5x deletion completed in 6.102168479s

S [SKIPPING] [6.353 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jul  7 11:28:09.668: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
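The SKIP above comes from the framework's count of schedulable, untainted nodes; a quick manual look at what that count is based on (the framework's actual filter is more involved than this):

kubectl get nodes
# Show each node's taints; nodes carrying node-role.kubernetes.io/master:NoSchedule are skipped by DaemonSet checks:
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'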
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:28:15.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-f350b3b9-c044-11ea-9ad7-0242ac11001b
Jul  7 11:28:16.130: INFO: Pod name my-hostname-basic-f350b3b9-c044-11ea-9ad7-0242ac11001b: Found 0 pods out of 1
Jul  7 11:28:21.133: INFO: Pod name my-hostname-basic-f350b3b9-c044-11ea-9ad7-0242ac11001b: Found 1 pods out of 1
Jul  7 11:28:21.133: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f350b3b9-c044-11ea-9ad7-0242ac11001b" are running
Jul  7 11:28:21.136: INFO: Pod "my-hostname-basic-f350b3b9-c044-11ea-9ad7-0242ac11001b-87cst" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 11:28:16 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 11:28:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 11:28:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-07 11:28:16 +0000 UTC Reason: Message:}])
Jul  7 11:28:21.136: INFO: Trying to dial the pod
Jul  7 11:28:26.147: INFO: Controller my-hostname-basic-f350b3b9-c044-11ea-9ad7-0242ac11001b: Got expected result from replica 1 [my-hostname-basic-f350b3b9-c044-11ea-9ad7-0242ac11001b-87cst]: "my-hostname-basic-f350b3b9-c044-11ea-9ad7-0242ac11001b-87cst", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:28:26.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-fz6sz" for this suite.
Jul  7 11:28:32.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:28:32.227: INFO: namespace: e2e-tests-replication-controller-fz6sz, resource: bindings, ignored listing per whitelist
Jul  7 11:28:32.271: INFO: namespace e2e-tests-replication-controller-fz6sz deletion completed in 6.120308415s

• [SLOW TEST:16.481 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
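A hand-run counterpart to the replica-serving check above: a replication controller whose pods answer with their own name. The image tag and port are assumptions based on the usual serve-hostname test image; the suite's generated names differ.

kubectl create namespace rc-demo
cat <<'EOF' | kubectl -n rc-demo create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # tag assumed
        ports:
        - containerPort: 9376
EOF
kubectl -n rc-demo get pods -l name=my-hostname-basic -o wide
# The suite dials each replica through the apiserver proxy, roughly:
#   GET /api/v1/namespaces/rc-demo/pods/<pod-name>:9376/proxy/
# and expects the response body to equal the pod's own name.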
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:28:32.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jul  7 11:28:32.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:33.238: INFO: stderr: ""
Jul  7 11:28:33.238: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  7 11:28:33.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:33.358: INFO: stderr: ""
Jul  7 11:28:33.358: INFO: stdout: "update-demo-nautilus-9prkk update-demo-nautilus-qd259 "
Jul  7 11:28:33.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9prkk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:33.696: INFO: stderr: ""
Jul  7 11:28:33.697: INFO: stdout: ""
Jul  7 11:28:33.697: INFO: update-demo-nautilus-9prkk is created but not running
Jul  7 11:28:38.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:38.813: INFO: stderr: ""
Jul  7 11:28:38.813: INFO: stdout: "update-demo-nautilus-9prkk update-demo-nautilus-qd259 "
Jul  7 11:28:38.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9prkk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:38.918: INFO: stderr: ""
Jul  7 11:28:38.918: INFO: stdout: "true"
Jul  7 11:28:38.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9prkk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:39.020: INFO: stderr: ""
Jul  7 11:28:39.020: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 11:28:39.020: INFO: validating pod update-demo-nautilus-9prkk
Jul  7 11:28:39.025: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 11:28:39.025: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  7 11:28:39.025: INFO: update-demo-nautilus-9prkk is verified up and running
Jul  7 11:28:39.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qd259 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:39.121: INFO: stderr: ""
Jul  7 11:28:39.121: INFO: stdout: "true"
Jul  7 11:28:39.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qd259 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:39.217: INFO: stderr: ""
Jul  7 11:28:39.217: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 11:28:39.217: INFO: validating pod update-demo-nautilus-qd259
Jul  7 11:28:39.227: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 11:28:39.227: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  7 11:28:39.227: INFO: update-demo-nautilus-qd259 is verified up and running
STEP: scaling down the replication controller
Jul  7 11:28:39.229: INFO: scanned /root for discovery docs: 
Jul  7 11:28:39.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:40.442: INFO: stderr: ""
Jul  7 11:28:40.442: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  7 11:28:40.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:40.554: INFO: stderr: ""
Jul  7 11:28:40.554: INFO: stdout: "update-demo-nautilus-9prkk update-demo-nautilus-qd259 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul  7 11:28:45.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:45.647: INFO: stderr: ""
Jul  7 11:28:45.647: INFO: stdout: "update-demo-nautilus-9prkk "
Jul  7 11:28:45.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9prkk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:45.749: INFO: stderr: ""
Jul  7 11:28:45.749: INFO: stdout: "true"
Jul  7 11:28:45.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9prkk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:45.837: INFO: stderr: ""
Jul  7 11:28:45.837: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 11:28:45.837: INFO: validating pod update-demo-nautilus-9prkk
Jul  7 11:28:45.864: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 11:28:45.864: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  7 11:28:45.864: INFO: update-demo-nautilus-9prkk is verified up and running
STEP: scaling up the replication controller
Jul  7 11:28:45.867: INFO: scanned /root for discovery docs: 
Jul  7 11:28:45.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:47.010: INFO: stderr: ""
Jul  7 11:28:47.010: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  7 11:28:47.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:47.105: INFO: stderr: ""
Jul  7 11:28:47.105: INFO: stdout: "update-demo-nautilus-9prkk update-demo-nautilus-svkqc "
Jul  7 11:28:47.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9prkk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:47.204: INFO: stderr: ""
Jul  7 11:28:47.204: INFO: stdout: "true"
Jul  7 11:28:47.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9prkk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:47.301: INFO: stderr: ""
Jul  7 11:28:47.301: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 11:28:47.301: INFO: validating pod update-demo-nautilus-9prkk
Jul  7 11:28:47.304: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 11:28:47.304: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  7 11:28:47.304: INFO: update-demo-nautilus-9prkk is verified up and running
Jul  7 11:28:47.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-svkqc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:47.462: INFO: stderr: ""
Jul  7 11:28:47.462: INFO: stdout: ""
Jul  7 11:28:47.462: INFO: update-demo-nautilus-svkqc is created but not running
Jul  7 11:28:52.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:52.565: INFO: stderr: ""
Jul  7 11:28:52.565: INFO: stdout: "update-demo-nautilus-9prkk update-demo-nautilus-svkqc "
Jul  7 11:28:52.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9prkk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:52.658: INFO: stderr: ""
Jul  7 11:28:52.658: INFO: stdout: "true"
Jul  7 11:28:52.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9prkk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:52.757: INFO: stderr: ""
Jul  7 11:28:52.757: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 11:28:52.757: INFO: validating pod update-demo-nautilus-9prkk
Jul  7 11:28:52.760: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 11:28:52.760: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  7 11:28:52.760: INFO: update-demo-nautilus-9prkk is verified up and running
Jul  7 11:28:52.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-svkqc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:52.854: INFO: stderr: ""
Jul  7 11:28:52.854: INFO: stdout: "true"
Jul  7 11:28:52.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-svkqc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:52.945: INFO: stderr: ""
Jul  7 11:28:52.945: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 11:28:52.945: INFO: validating pod update-demo-nautilus-svkqc
Jul  7 11:28:52.950: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 11:28:52.950: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  7 11:28:52.950: INFO: update-demo-nautilus-svkqc is verified up and running
STEP: using delete to clean up resources
Jul  7 11:28:52.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:53.043: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 11:28:53.043: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul  7 11:28:53.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-2gg2m'
Jul  7 11:28:53.164: INFO: stderr: "No resources found.\n"
Jul  7 11:28:53.164: INFO: stdout: ""
Jul  7 11:28:53.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-2gg2m -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  7 11:28:53.264: INFO: stderr: ""
Jul  7 11:28:53.264: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:28:53.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2gg2m" for this suite.
Jul  7 11:28:59.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:28:59.335: INFO: namespace: e2e-tests-kubectl-2gg2m, resource: bindings, ignored listing per whitelist
Jul  7 11:28:59.368: INFO: namespace e2e-tests-kubectl-2gg2m deletion completed in 6.099466374s

• [SLOW TEST:27.097 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
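For reference, the go-template checks and the cleanup step above can be reproduced by hand. The run relies on an "exists" helper available to the template printer used by the suite; the sketch below sticks to plain field access (which behaves the same on unstructured data) and uses placeholder $POD/$NS names rather than the generated ones from this run:

  # Is the update-demo container running? (prints "true" when it is)
  kubectl get pod "$POD" -n "$NS" -o go-template \
    --template='{{range .status.containerStatuses}}{{if and (eq .name "update-demo") .state.running}}true{{end}}{{end}}'

  # Which image is the update-demo container using?
  kubectl get pod "$POD" -n "$NS" -o go-template \
    --template='{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}'

  # Force-delete the replication controller, as the cleanup step does
  # (the suite pipes the original manifest to `delete -f -`; naming the RC directly is equivalent here)
  kubectl delete rc update-demo-nautilus -n "$NS" --grace-period=0 --force
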
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:28:59.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul  7 11:28:59.474: INFO: Waiting up to 5m0s for pod "pod-0d2e1ed6-c045-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-cpc88" to be "success or failure"
Jul  7 11:28:59.477: INFO: Pod "pod-0d2e1ed6-c045-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.305957ms
Jul  7 11:29:01.481: INFO: Pod "pod-0d2e1ed6-c045-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006926312s
Jul  7 11:29:03.511: INFO: Pod "pod-0d2e1ed6-c045-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037187765s
Jul  7 11:29:05.523: INFO: Pod "pod-0d2e1ed6-c045-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049393235s
STEP: Saw pod success
Jul  7 11:29:05.524: INFO: Pod "pod-0d2e1ed6-c045-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:29:05.528: INFO: Trying to get logs from node hunter-worker2 pod pod-0d2e1ed6-c045-11ea-9ad7-0242ac11001b container test-container: 
STEP: delete the pod
Jul  7 11:29:05.655: INFO: Waiting for pod pod-0d2e1ed6-c045-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:29:05.694: INFO: Pod pod-0d2e1ed6-c045-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:29:05.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cpc88" for this suite.
Jul  7 11:29:13.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:29:13.760: INFO: namespace: e2e-tests-emptydir-cpc88, resource: bindings, ignored listing per whitelist
Jul  7 11:29:13.793: INFO: namespace e2e-tests-emptydir-cpc88 deletion completed in 8.091406548s

• [SLOW TEST:14.424 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
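The pod under test is generated by the framework (it uses the suite's mounttest image); a hand-written manifest that exercises the same thing is roughly the following, with illustrative names and image. The (root,0644,default) variant later in this log differs only in the file mode being checked:

  kubectl apply -n "$NS" -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "ls -ld /mnt/volume && stat -c '%a' /mnt/volume"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/volume
    volumes:
    - name: scratch
      emptyDir: {}   # "default" medium = node-local disk; medium: Memory would use tmpfs instead
  EOF
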
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:29:13.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-557l
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 11:29:14.213: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-557l" in namespace "e2e-tests-subpath-szq7t" to be "success or failure"
Jul  7 11:29:14.665: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Pending", Reason="", readiness=false. Elapsed: 451.651483ms
Jul  7 11:29:16.670: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.456693757s
Jul  7 11:29:18.674: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.460954177s
Jul  7 11:29:20.677: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.464384188s
Jul  7 11:29:22.834: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Running", Reason="", readiness=false. Elapsed: 8.621008842s
Jul  7 11:29:24.837: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Running", Reason="", readiness=false. Elapsed: 10.624341912s
Jul  7 11:29:26.842: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Running", Reason="", readiness=false. Elapsed: 12.628863534s
Jul  7 11:29:28.846: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Running", Reason="", readiness=false. Elapsed: 14.632674345s
Jul  7 11:29:30.849: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Running", Reason="", readiness=false. Elapsed: 16.636506175s
Jul  7 11:29:32.854: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Running", Reason="", readiness=false. Elapsed: 18.640753972s
Jul  7 11:29:34.857: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Running", Reason="", readiness=false. Elapsed: 20.644440012s
Jul  7 11:29:36.861: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Running", Reason="", readiness=false. Elapsed: 22.647999673s
Jul  7 11:29:39.212: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Running", Reason="", readiness=false. Elapsed: 24.998587717s
Jul  7 11:29:41.216: INFO: Pod "pod-subpath-test-configmap-557l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.002926649s
STEP: Saw pod success
Jul  7 11:29:41.216: INFO: Pod "pod-subpath-test-configmap-557l" satisfied condition "success or failure"
Jul  7 11:29:41.219: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-557l container test-container-subpath-configmap-557l: 
STEP: delete the pod
Jul  7 11:29:41.271: INFO: Waiting for pod pod-subpath-test-configmap-557l to disappear
Jul  7 11:29:41.388: INFO: Pod pod-subpath-test-configmap-557l no longer exists
STEP: Deleting pod pod-subpath-test-configmap-557l
Jul  7 11:29:41.388: INFO: Deleting pod "pod-subpath-test-configmap-557l" in namespace "e2e-tests-subpath-szq7t"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:29:41.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-szq7t" for this suite.
Jul  7 11:29:47.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:29:47.746: INFO: namespace: e2e-tests-subpath-szq7t, resource: bindings, ignored listing per whitelist
Jul  7 11:29:47.748: INFO: namespace e2e-tests-subpath-szq7t deletion completed in 6.353037571s

• [SLOW TEST:33.954 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
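The point of this subpath case is mounting a single ConfigMap key, via subPath, over a path that already exists as a regular file in the container image. A minimal sketch under assumed names (the real test uses the framework's generated pod and image):

  kubectl create configmap subpath-demo -n "$NS" --from-literal=index.html='hello from the configmap'

  kubectl apply -n "$NS" -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-configmap-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "cat /etc/group"]   # /etc/group already exists in the image; it should now show the ConfigMap value
      volumeMounts:
      - name: cm
        mountPath: /etc/group    # mount one key over an existing file, not a whole directory
        subPath: index.html
    volumes:
    - name: cm
      configMap:
        name: subpath-demo
  EOF
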
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:29:47.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  7 11:29:47.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-xbksm'
Jul  7 11:29:48.238: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  7 11:29:48.238: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jul  7 11:29:48.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-xbksm'
Jul  7 11:29:48.799: INFO: stderr: ""
Jul  7 11:29:48.799: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:29:48.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xbksm" for this suite.
Jul  7 11:29:55.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:29:55.276: INFO: namespace: e2e-tests-kubectl-xbksm, resource: bindings, ignored listing per whitelist
Jul  7 11:29:55.306: INFO: namespace e2e-tests-kubectl-xbksm deletion completed in 6.303621558s

• [SLOW TEST:7.557 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
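The stderr captured above shows the job/v1 generator is already deprecated on this client (v1.13). A sketch of the same creation plus the replacement the deprecation message points to, and the cleanup from the AfterEach step:

  # Deprecated generator form exercised by the test
  kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 \
    --image=docker.io/library/nginx:1.14-alpine -n "$NS"

  # Equivalent on newer clients, which drop the run generators entirely
  kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine -n "$NS"

  # Cleanup, as in the AfterEach step above
  kubectl delete jobs e2e-test-nginx-job -n "$NS"
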
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:29:55.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-zzqjt/configmap-test-2e8e99ba-c045-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume configMaps
Jul  7 11:29:55.480: INFO: Waiting up to 5m0s for pod "pod-configmaps-2e90a582-c045-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-configmap-zzqjt" to be "success or failure"
Jul  7 11:29:55.498: INFO: Pod "pod-configmaps-2e90a582-c045-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.146314ms
Jul  7 11:29:57.502: INFO: Pod "pod-configmaps-2e90a582-c045-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02164826s
Jul  7 11:29:59.505: INFO: Pod "pod-configmaps-2e90a582-c045-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025351651s
Jul  7 11:30:01.651: INFO: Pod "pod-configmaps-2e90a582-c045-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 6.171220568s
Jul  7 11:30:03.655: INFO: Pod "pod-configmaps-2e90a582-c045-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.174836183s
STEP: Saw pod success
Jul  7 11:30:03.655: INFO: Pod "pod-configmaps-2e90a582-c045-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:30:03.658: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-2e90a582-c045-11ea-9ad7-0242ac11001b container env-test: 
STEP: delete the pod
Jul  7 11:30:04.219: INFO: Waiting for pod pod-configmaps-2e90a582-c045-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:30:04.251: INFO: Pod pod-configmaps-2e90a582-c045-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:30:04.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-zzqjt" for this suite.
Jul  7 11:30:10.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:30:10.300: INFO: namespace: e2e-tests-configmap-zzqjt, resource: bindings, ignored listing per whitelist
Jul  7 11:30:10.355: INFO: namespace e2e-tests-configmap-zzqjt deletion completed in 6.100221645s

• [SLOW TEST:15.049 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
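The env-test container above consumes a ConfigMap key through an environment variable; a minimal hand-rolled equivalent (names, key, and image are illustrative, not the generated ones from this run):

  kubectl create configmap env-demo -n "$NS" --from-literal=data-1=value-1

  kubectl apply -n "$NS" -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: env-demo
            key: data-1
  EOF
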
SSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:30:10.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul  7 11:30:10.499: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:10.500: INFO: Number of nodes with available pods: 0
Jul  7 11:30:10.500: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:30:11.504: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:11.507: INFO: Number of nodes with available pods: 0
Jul  7 11:30:11.507: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:30:12.506: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:12.509: INFO: Number of nodes with available pods: 0
Jul  7 11:30:12.509: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:30:13.628: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:13.632: INFO: Number of nodes with available pods: 0
Jul  7 11:30:13.632: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:30:14.596: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:14.600: INFO: Number of nodes with available pods: 0
Jul  7 11:30:14.600: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:30:15.891: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:16.356: INFO: Number of nodes with available pods: 0
Jul  7 11:30:16.356: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:30:16.548: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:16.551: INFO: Number of nodes with available pods: 0
Jul  7 11:30:16.551: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:30:17.535: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:17.538: INFO: Number of nodes with available pods: 0
Jul  7 11:30:17.538: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 11:30:18.776: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:18.779: INFO: Number of nodes with available pods: 2
Jul  7 11:30:18.779: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jul  7 11:30:18.809: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:18.811: INFO: Number of nodes with available pods: 1
Jul  7 11:30:18.811: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:19.816: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:19.821: INFO: Number of nodes with available pods: 1
Jul  7 11:30:19.821: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:20.929: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:20.940: INFO: Number of nodes with available pods: 1
Jul  7 11:30:20.940: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:21.817: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:21.821: INFO: Number of nodes with available pods: 1
Jul  7 11:30:21.821: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:22.817: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:22.821: INFO: Number of nodes with available pods: 1
Jul  7 11:30:22.821: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:23.817: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:23.820: INFO: Number of nodes with available pods: 1
Jul  7 11:30:23.820: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:24.818: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:24.824: INFO: Number of nodes with available pods: 1
Jul  7 11:30:24.824: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:25.816: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:25.820: INFO: Number of nodes with available pods: 1
Jul  7 11:30:25.820: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:26.815: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:26.819: INFO: Number of nodes with available pods: 1
Jul  7 11:30:26.819: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:27.816: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:27.820: INFO: Number of nodes with available pods: 1
Jul  7 11:30:27.820: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:28.816: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:28.819: INFO: Number of nodes with available pods: 1
Jul  7 11:30:28.819: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:29.816: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:29.819: INFO: Number of nodes with available pods: 1
Jul  7 11:30:29.819: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:30.815: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:30.818: INFO: Number of nodes with available pods: 1
Jul  7 11:30:30.818: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:31.817: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:31.820: INFO: Number of nodes with available pods: 1
Jul  7 11:30:31.820: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:32.818: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:32.822: INFO: Number of nodes with available pods: 1
Jul  7 11:30:32.822: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:33.816: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:33.820: INFO: Number of nodes with available pods: 1
Jul  7 11:30:33.820: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:34.817: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:34.820: INFO: Number of nodes with available pods: 1
Jul  7 11:30:34.820: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:35.816: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:35.820: INFO: Number of nodes with available pods: 1
Jul  7 11:30:35.820: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:36.817: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:36.820: INFO: Number of nodes with available pods: 1
Jul  7 11:30:36.820: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:37.817: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:37.820: INFO: Number of nodes with available pods: 1
Jul  7 11:30:37.820: INFO: Node hunter-worker2 is running more than one daemon pod
Jul  7 11:30:38.817: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  7 11:30:38.820: INFO: Number of nodes with available pods: 2
Jul  7 11:30:38.820: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-6zkh4, will wait for the garbage collector to delete the pods
Jul  7 11:30:38.882: INFO: Deleting DaemonSet.extensions daemon-set took: 6.842301ms
Jul  7 11:30:39.182: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.26056ms
Jul  7 11:30:54.086: INFO: Number of nodes with available pods: 0
Jul  7 11:30:54.086: INFO: Number of running nodes: 0, number of available pods: 0
Jul  7 11:30:54.089: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6zkh4/daemonsets","resourceVersion":"597259"},"items":null}

Jul  7 11:30:54.092: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6zkh4/pods","resourceVersion":"597259"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:30:54.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6zkh4" for this suite.
Jul  7 11:31:00.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:31:00.339: INFO: namespace: e2e-tests-daemonsets-6zkh4, resource: bindings, ignored listing per whitelist
Jul  7 11:31:00.387: INFO: namespace e2e-tests-daemonsets-6zkh4 deletion completed in 6.124518286s

• [SLOW TEST:50.031 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
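The repeated "can't tolerate node hunter-control-plane with taints" lines above are expected: the DaemonSet carries no toleration for the master NoSchedule taint, so only the two worker nodes count toward "Number of running nodes". A rough equivalent of the "simple DaemonSet" being created (image and labels are illustrative):

  kubectl apply -n "$NS" -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        daemonset-name: daemon-set
    template:
      metadata:
        labels:
          daemonset-name: daemon-set
      spec:
        containers:
        - name: app
          image: nginx:1.14-alpine
        # No toleration for node-role.kubernetes.io/master:NoSchedule,
        # so the control-plane node is skipped, as the log notes above.
  EOF

  # Watch the rollout reach one pod per schedulable node
  kubectl rollout status daemonset/daemon-set -n "$NS"
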
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:31:00.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:31:36.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-jsb8t" for this suite.
Jul  7 11:31:42.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:31:42.710: INFO: namespace: e2e-tests-container-runtime-jsb8t, resource: bindings, ignored listing per whitelist
Jul  7 11:31:42.734: INFO: namespace e2e-tests-container-runtime-jsb8t deletion completed in 6.150973389s

• [SLOW TEST:42.347 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
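The three container names above map to the three restart policies (terminate-cmd-rpa/rpof/rpn for Always, OnFailure and Never), and the assertions are all against pod status fields. Those same fields can be read back for any pod with jsonpath; a sketch with a placeholder $POD:

  # Phase, restart count and current state, as asserted by the spec
  kubectl get pod "$POD" -n "$NS" \
    -o jsonpath='{.status.phase}{"\t"}{.status.containerStatuses[0].restartCount}{"\t"}{.status.containerStatuses[0].state}{"\n"}'

  # Ready condition
  kubectl get pod "$POD" -n "$NS" -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
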
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:31:42.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-2g5s
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 11:31:42.855: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-2g5s" in namespace "e2e-tests-subpath-zl6bn" to be "success or failure"
Jul  7 11:31:42.875: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Pending", Reason="", readiness=false. Elapsed: 20.799898ms
Jul  7 11:31:44.880: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025031756s
Jul  7 11:31:46.884: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029233734s
Jul  7 11:31:48.888: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033001168s
Jul  7 11:31:50.892: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Running", Reason="", readiness=true. Elapsed: 8.037413691s
Jul  7 11:31:52.896: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Running", Reason="", readiness=false. Elapsed: 10.041203475s
Jul  7 11:31:54.900: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Running", Reason="", readiness=false. Elapsed: 12.045289248s
Jul  7 11:31:56.905: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Running", Reason="", readiness=false. Elapsed: 14.050499291s
Jul  7 11:31:58.909: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Running", Reason="", readiness=false. Elapsed: 16.054546401s
Jul  7 11:32:00.914: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Running", Reason="", readiness=false. Elapsed: 18.059332973s
Jul  7 11:32:02.918: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Running", Reason="", readiness=false. Elapsed: 20.063716736s
Jul  7 11:32:04.923: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Running", Reason="", readiness=false. Elapsed: 22.068143028s
Jul  7 11:32:06.927: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Running", Reason="", readiness=false. Elapsed: 24.072399506s
Jul  7 11:32:08.931: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Running", Reason="", readiness=false. Elapsed: 26.076550307s
Jul  7 11:32:10.935: INFO: Pod "pod-subpath-test-downwardapi-2g5s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.080021832s
STEP: Saw pod success
Jul  7 11:32:10.935: INFO: Pod "pod-subpath-test-downwardapi-2g5s" satisfied condition "success or failure"
Jul  7 11:32:10.937: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-downwardapi-2g5s container test-container-subpath-downwardapi-2g5s: 
STEP: delete the pod
Jul  7 11:32:10.977: INFO: Waiting for pod pod-subpath-test-downwardapi-2g5s to disappear
Jul  7 11:32:10.998: INFO: Pod pod-subpath-test-downwardapi-2g5s no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-2g5s
Jul  7 11:32:10.998: INFO: Deleting pod "pod-subpath-test-downwardapi-2g5s" in namespace "e2e-tests-subpath-zl6bn"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:32:11.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zl6bn" for this suite.
Jul  7 11:32:17.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:32:17.074: INFO: namespace: e2e-tests-subpath-zl6bn, resource: bindings, ignored listing per whitelist
Jul  7 11:32:17.099: INFO: namespace e2e-tests-subpath-zl6bn deletion completed in 6.094951405s

• [SLOW TEST:34.366 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
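This is the downward API flavour of the same subpath pattern: one projected file from the volume is mounted via subPath instead of the whole directory. A minimal sketch under assumed names:

  kubectl apply -n "$NS" -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-downwardapi-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "cat /data/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /data/podname
        subPath: podname          # mount the single projected file, not the whole volume
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
  EOF
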
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:32:17.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jul  7 11:32:17.259: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pbgwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-pbgwd/configmaps/e2e-watch-test-watch-closed,UID:83085891-c045-11ea-a300-0242ac110004,ResourceVersion:597563,Generation:0,CreationTimestamp:2020-07-07 11:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  7 11:32:17.259: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pbgwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-pbgwd/configmaps/e2e-watch-test-watch-closed,UID:83085891-c045-11ea-a300-0242ac110004,ResourceVersion:597564,Generation:0,CreationTimestamp:2020-07-07 11:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jul  7 11:32:17.276: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pbgwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-pbgwd/configmaps/e2e-watch-test-watch-closed,UID:83085891-c045-11ea-a300-0242ac110004,ResourceVersion:597565,Generation:0,CreationTimestamp:2020-07-07 11:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  7 11:32:17.276: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-pbgwd,SelfLink:/api/v1/namespaces/e2e-tests-watch-pbgwd/configmaps/e2e-watch-test-watch-closed,UID:83085891-c045-11ea-a300-0242ac110004,ResourceVersion:597566,Generation:0,CreationTimestamp:2020-07-07 11:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:32:17.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-pbgwd" for this suite.
Jul  7 11:32:23.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:32:23.337: INFO: namespace: e2e-tests-watch-pbgwd, resource: bindings, ignored listing per whitelist
Jul  7 11:32:23.372: INFO: namespace e2e-tests-watch-pbgwd deletion completed in 6.092501285s

• [SLOW TEST:6.273 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
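The spec closes its watch after two notifications, mutates the ConfigMap again while no watch is open, then resumes from the last resourceVersion it observed (597564 in this run) and still sees the missed MODIFIED and the DELETED events. Outside the framework, the same resume can be done against the API directly, e.g. through kubectl proxy; the port and selector below are illustrative:

  # Expose the API locally
  kubectl proxy --port=8001 &

  # Resume watching configmaps from a previously observed resourceVersion
  # (597564 is the value from this run's log; use whatever your last handled event carried)
  curl -N "http://127.0.0.1:8001/api/v1/namespaces/$NS/configmaps?watch=true&resourceVersion=597564&labelSelector=watch-this-configmap%3Dwatch-closed-and-restarted"
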
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:32:23.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-86c3c00f-c045-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume secrets
Jul  7 11:32:23.579: INFO: Waiting up to 5m0s for pod "pod-secrets-86d63aee-c045-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-secrets-wttxq" to be "success or failure"
Jul  7 11:32:23.596: INFO: Pod "pod-secrets-86d63aee-c045-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.7122ms
Jul  7 11:32:25.599: INFO: Pod "pod-secrets-86d63aee-c045-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020025594s
Jul  7 11:32:27.602: INFO: Pod "pod-secrets-86d63aee-c045-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023378766s
Jul  7 11:32:29.607: INFO: Pod "pod-secrets-86d63aee-c045-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027816936s
STEP: Saw pod success
Jul  7 11:32:29.607: INFO: Pod "pod-secrets-86d63aee-c045-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:32:29.610: INFO: Trying to get logs from node hunter-worker pod pod-secrets-86d63aee-c045-11ea-9ad7-0242ac11001b container secret-volume-test: 
STEP: delete the pod
Jul  7 11:32:29.867: INFO: Waiting for pod pod-secrets-86d63aee-c045-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:32:29.931: INFO: Pod pod-secrets-86d63aee-c045-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:32:29.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-wttxq" for this suite.
Jul  7 11:32:37.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:32:37.994: INFO: namespace: e2e-tests-secrets-wttxq, resource: bindings, ignored listing per whitelist
Jul  7 11:32:38.027: INFO: namespace e2e-tests-secrets-wttxq deletion completed in 8.091865406s
STEP: Destroying namespace "e2e-tests-secret-namespace-l7vn6" for this suite.
Jul  7 11:32:44.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:32:44.142: INFO: namespace: e2e-tests-secret-namespace-l7vn6, resource: bindings, ignored listing per whitelist
Jul  7 11:32:44.180: INFO: namespace e2e-tests-secret-namespace-l7vn6 deletion completed in 6.153215963s

• [SLOW TEST:20.808 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
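The second namespace destroyed at the end exists only to hold a secret with the same name: the spec verifies that a secret volume always resolves in the pod's own namespace. A sketch of the same setup with placeholder $NS/$OTHER_NS and illustrative values:

  # Same secret name in two namespaces with different contents
  kubectl create secret generic secret-test -n "$NS"       --from-literal=data-1=from-pod-namespace
  kubectl create secret generic secret-test -n "$OTHER_NS" --from-literal=data-1=from-other-namespace

  kubectl apply -n "$NS" -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-volume/data-1"]   # should print the value from the pod's own namespace
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test
  EOF
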
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:32:44.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul  7 11:32:44.276: INFO: Waiting up to 5m0s for pod "pod-932ca678-c045-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-8fpxv" to be "success or failure"
Jul  7 11:32:44.280: INFO: Pod "pod-932ca678-c045-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.784463ms
Jul  7 11:32:46.284: INFO: Pod "pod-932ca678-c045-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007906006s
Jul  7 11:32:48.288: INFO: Pod "pod-932ca678-c045-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.011933391s
Jul  7 11:32:50.292: INFO: Pod "pod-932ca678-c045-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015732737s
STEP: Saw pod success
Jul  7 11:32:50.292: INFO: Pod "pod-932ca678-c045-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:32:50.294: INFO: Trying to get logs from node hunter-worker pod pod-932ca678-c045-11ea-9ad7-0242ac11001b container test-container: 
STEP: delete the pod
Jul  7 11:32:50.316: INFO: Waiting for pod pod-932ca678-c045-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:32:50.320: INFO: Pod pod-932ca678-c045-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:32:50.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8fpxv" for this suite.
Jul  7 11:32:56.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:32:56.499: INFO: namespace: e2e-tests-emptydir-8fpxv, resource: bindings, ignored listing per whitelist
Jul  7 11:32:56.592: INFO: namespace e2e-tests-emptydir-8fpxv deletion completed in 6.267933127s

• [SLOW TEST:12.410 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:32:56.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jul  7 11:32:56.760: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-69cvz" to be "success or failure"
Jul  7 11:32:56.770: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.975649ms
Jul  7 11:32:58.774: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013929112s
Jul  7 11:33:00.778: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018436309s
Jul  7 11:33:02.782: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021965785s
STEP: Saw pod success
Jul  7 11:33:02.782: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul  7 11:33:02.784: INFO: Trying to get logs from node hunter-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jul  7 11:33:02.993: INFO: Waiting for pod pod-host-path-test to disappear
Jul  7 11:33:03.005: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:33:03.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-69cvz" for this suite.
Jul  7 11:33:09.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:33:09.106: INFO: namespace: e2e-tests-hostpath-69cvz, resource: bindings, ignored listing per whitelist
Jul  7 11:33:09.155: INFO: namespace e2e-tests-hostpath-69cvz deletion completed in 6.147538749s

• [SLOW TEST:12.563 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
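The hostPath spec mounts a directory from the node and checks the mode the container sees. A hand-written rough equivalent (host path, names and image are illustrative; the real pod-host-path-test uses the framework's mounttest containers):

  kubectl apply -n "$NS" -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostpath-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-1
      image: busybox
      command: ["sh", "-c", "stat -c '%a %F' /mnt/hostpath"]
      volumeMounts:
      - name: host-dir
        mountPath: /mnt/hostpath
    volumes:
    - name: host-dir
      hostPath:
        path: /tmp/hostpath-demo
        type: DirectoryOrCreate
  EOF
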
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:33:09.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-6ft6f
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-6ft6f
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-6ft6f
Jul  7 11:33:09.438: INFO: Found 0 stateful pods, waiting for 1
Jul  7 11:33:19.483: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jul  7 11:33:19.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  7 11:33:19.723: INFO: stderr: "I0707 11:33:19.632137    1528 log.go:172] (0xc00015c630) (0xc000693400) Create stream\nI0707 11:33:19.632198    1528 log.go:172] (0xc00015c630) (0xc000693400) Stream added, broadcasting: 1\nI0707 11:33:19.634956    1528 log.go:172] (0xc00015c630) Reply frame received for 1\nI0707 11:33:19.635016    1528 log.go:172] (0xc00015c630) (0xc0003e2000) Create stream\nI0707 11:33:19.635033    1528 log.go:172] (0xc00015c630) (0xc0003e2000) Stream added, broadcasting: 3\nI0707 11:33:19.635929    1528 log.go:172] (0xc00015c630) Reply frame received for 3\nI0707 11:33:19.635990    1528 log.go:172] (0xc00015c630) (0xc00053a000) Create stream\nI0707 11:33:19.636013    1528 log.go:172] (0xc00015c630) (0xc00053a000) Stream added, broadcasting: 5\nI0707 11:33:19.637031    1528 log.go:172] (0xc00015c630) Reply frame received for 5\nI0707 11:33:19.718327    1528 log.go:172] (0xc00015c630) Data frame received for 3\nI0707 11:33:19.718383    1528 log.go:172] (0xc0003e2000) (3) Data frame handling\nI0707 11:33:19.718414    1528 log.go:172] (0xc0003e2000) (3) Data frame sent\nI0707 11:33:19.718431    1528 log.go:172] (0xc00015c630) Data frame received for 3\nI0707 11:33:19.718441    1528 log.go:172] (0xc0003e2000) (3) Data frame handling\nI0707 11:33:19.718490    1528 log.go:172] (0xc00015c630) Data frame received for 5\nI0707 11:33:19.718502    1528 log.go:172] (0xc00053a000) (5) Data frame handling\nI0707 11:33:19.720334    1528 log.go:172] (0xc00015c630) Data frame received for 1\nI0707 11:33:19.720362    1528 log.go:172] (0xc000693400) (1) Data frame handling\nI0707 11:33:19.720376    1528 log.go:172] (0xc000693400) (1) Data frame sent\nI0707 11:33:19.720390    1528 log.go:172] (0xc00015c630) (0xc000693400) Stream removed, broadcasting: 1\nI0707 11:33:19.720435    1528 log.go:172] (0xc00015c630) Go away received\nI0707 11:33:19.720600    1528 log.go:172] (0xc00015c630) (0xc000693400) Stream removed, broadcasting: 1\nI0707 11:33:19.720618    1528 log.go:172] (0xc00015c630) (0xc0003e2000) Stream removed, broadcasting: 3\nI0707 11:33:19.720632    1528 log.go:172] (0xc00015c630) (0xc00053a000) Stream removed, broadcasting: 5\n"
Jul  7 11:33:19.723: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  7 11:33:19.723: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  7 11:33:19.727: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul  7 11:33:29.732: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  7 11:33:29.732: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 11:33:29.891: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  7 11:33:29.891: INFO: ss-0  hunter-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:09 +0000 UTC  }]
Jul  7 11:33:29.891: INFO: 
Jul  7 11:33:29.891: INFO: StatefulSet ss has not reached scale 3, at 1
Jul  7 11:33:30.897: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.851704516s
Jul  7 11:33:31.902: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.846131278s
Jul  7 11:33:32.907: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.841395109s
Jul  7 11:33:33.962: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.836463334s
Jul  7 11:33:34.968: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.780892125s
Jul  7 11:33:35.974: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.774268639s
Jul  7 11:33:36.980: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.768719944s
Jul  7 11:33:37.984: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.763064426s
Jul  7 11:33:38.990: INFO: Verifying statefulset ss doesn't scale past 3 for another 758.617904ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-6ft6f
Jul  7 11:33:39.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:33:40.235: INFO: stderr: "I0707 11:33:40.139748    1550 log.go:172] (0xc00072a4d0) (0xc0004892c0) Create stream\nI0707 11:33:40.139807    1550 log.go:172] (0xc00072a4d0) (0xc0004892c0) Stream added, broadcasting: 1\nI0707 11:33:40.142481    1550 log.go:172] (0xc00072a4d0) Reply frame received for 1\nI0707 11:33:40.142533    1550 log.go:172] (0xc00072a4d0) (0xc00051e000) Create stream\nI0707 11:33:40.142558    1550 log.go:172] (0xc00072a4d0) (0xc00051e000) Stream added, broadcasting: 3\nI0707 11:33:40.143679    1550 log.go:172] (0xc00072a4d0) Reply frame received for 3\nI0707 11:33:40.143748    1550 log.go:172] (0xc00072a4d0) (0xc00032c000) Create stream\nI0707 11:33:40.143767    1550 log.go:172] (0xc00072a4d0) (0xc00032c000) Stream added, broadcasting: 5\nI0707 11:33:40.144894    1550 log.go:172] (0xc00072a4d0) Reply frame received for 5\nI0707 11:33:40.229549    1550 log.go:172] (0xc00072a4d0) Data frame received for 5\nI0707 11:33:40.229639    1550 log.go:172] (0xc00032c000) (5) Data frame handling\nI0707 11:33:40.229696    1550 log.go:172] (0xc00072a4d0) Data frame received for 3\nI0707 11:33:40.229820    1550 log.go:172] (0xc00051e000) (3) Data frame handling\nI0707 11:33:40.229858    1550 log.go:172] (0xc00051e000) (3) Data frame sent\nI0707 11:33:40.229871    1550 log.go:172] (0xc00072a4d0) Data frame received for 3\nI0707 11:33:40.229887    1550 log.go:172] (0xc00051e000) (3) Data frame handling\nI0707 11:33:40.231390    1550 log.go:172] (0xc00072a4d0) Data frame received for 1\nI0707 11:33:40.231426    1550 log.go:172] (0xc0004892c0) (1) Data frame handling\nI0707 11:33:40.231457    1550 log.go:172] (0xc0004892c0) (1) Data frame sent\nI0707 11:33:40.231482    1550 log.go:172] (0xc00072a4d0) (0xc0004892c0) Stream removed, broadcasting: 1\nI0707 11:33:40.231508    1550 log.go:172] (0xc00072a4d0) Go away received\nI0707 11:33:40.231752    1550 log.go:172] (0xc00072a4d0) (0xc0004892c0) Stream removed, broadcasting: 1\nI0707 11:33:40.231782    1550 log.go:172] (0xc00072a4d0) (0xc00051e000) Stream removed, broadcasting: 3\nI0707 11:33:40.231797    1550 log.go:172] (0xc00072a4d0) (0xc00032c000) Stream removed, broadcasting: 5\n"
Jul  7 11:33:40.235: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  7 11:33:40.235: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  7 11:33:40.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:33:40.449: INFO: stderr: "I0707 11:33:40.372376    1574 log.go:172] (0xc0008342c0) (0xc00071e640) Create stream\nI0707 11:33:40.372462    1574 log.go:172] (0xc0008342c0) (0xc00071e640) Stream added, broadcasting: 1\nI0707 11:33:40.375449    1574 log.go:172] (0xc0008342c0) Reply frame received for 1\nI0707 11:33:40.375485    1574 log.go:172] (0xc0008342c0) (0xc00064cd20) Create stream\nI0707 11:33:40.375495    1574 log.go:172] (0xc0008342c0) (0xc00064cd20) Stream added, broadcasting: 3\nI0707 11:33:40.376331    1574 log.go:172] (0xc0008342c0) Reply frame received for 3\nI0707 11:33:40.376368    1574 log.go:172] (0xc0008342c0) (0xc00064ce60) Create stream\nI0707 11:33:40.376381    1574 log.go:172] (0xc0008342c0) (0xc00064ce60) Stream added, broadcasting: 5\nI0707 11:33:40.377437    1574 log.go:172] (0xc0008342c0) Reply frame received for 5\nI0707 11:33:40.442744    1574 log.go:172] (0xc0008342c0) Data frame received for 5\nI0707 11:33:40.442804    1574 log.go:172] (0xc00064ce60) (5) Data frame handling\nI0707 11:33:40.442827    1574 log.go:172] (0xc00064ce60) (5) Data frame sent\nI0707 11:33:40.442844    1574 log.go:172] (0xc0008342c0) Data frame received for 5\nI0707 11:33:40.442864    1574 log.go:172] (0xc00064ce60) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0707 11:33:40.442890    1574 log.go:172] (0xc0008342c0) Data frame received for 3\nI0707 11:33:40.442922    1574 log.go:172] (0xc00064cd20) (3) Data frame handling\nI0707 11:33:40.442959    1574 log.go:172] (0xc00064cd20) (3) Data frame sent\nI0707 11:33:40.442972    1574 log.go:172] (0xc0008342c0) Data frame received for 3\nI0707 11:33:40.442983    1574 log.go:172] (0xc00064cd20) (3) Data frame handling\nI0707 11:33:40.444179    1574 log.go:172] (0xc0008342c0) Data frame received for 1\nI0707 11:33:40.444196    1574 log.go:172] (0xc00071e640) (1) Data frame handling\nI0707 11:33:40.444205    1574 log.go:172] (0xc00071e640) (1) Data frame sent\nI0707 11:33:40.444213    1574 log.go:172] (0xc0008342c0) (0xc00071e640) Stream removed, broadcasting: 1\nI0707 11:33:40.444222    1574 log.go:172] (0xc0008342c0) Go away received\nI0707 11:33:40.444561    1574 log.go:172] (0xc0008342c0) (0xc00071e640) Stream removed, broadcasting: 1\nI0707 11:33:40.444593    1574 log.go:172] (0xc0008342c0) (0xc00064cd20) Stream removed, broadcasting: 3\nI0707 11:33:40.444609    1574 log.go:172] (0xc0008342c0) (0xc00064ce60) Stream removed, broadcasting: 5\n"
Jul  7 11:33:40.449: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  7 11:33:40.449: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  7 11:33:40.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:33:40.651: INFO: stderr: "I0707 11:33:40.572807    1597 log.go:172] (0xc0008262c0) (0xc000667360) Create stream\nI0707 11:33:40.572861    1597 log.go:172] (0xc0008262c0) (0xc000667360) Stream added, broadcasting: 1\nI0707 11:33:40.575797    1597 log.go:172] (0xc0008262c0) Reply frame received for 1\nI0707 11:33:40.575843    1597 log.go:172] (0xc0008262c0) (0xc000510000) Create stream\nI0707 11:33:40.575857    1597 log.go:172] (0xc0008262c0) (0xc000510000) Stream added, broadcasting: 3\nI0707 11:33:40.576800    1597 log.go:172] (0xc0008262c0) Reply frame received for 3\nI0707 11:33:40.576822    1597 log.go:172] (0xc0008262c0) (0xc000667400) Create stream\nI0707 11:33:40.576831    1597 log.go:172] (0xc0008262c0) (0xc000667400) Stream added, broadcasting: 5\nI0707 11:33:40.578178    1597 log.go:172] (0xc0008262c0) Reply frame received for 5\nI0707 11:33:40.644034    1597 log.go:172] (0xc0008262c0) Data frame received for 3\nI0707 11:33:40.644102    1597 log.go:172] (0xc000510000) (3) Data frame handling\nI0707 11:33:40.644129    1597 log.go:172] (0xc000510000) (3) Data frame sent\nI0707 11:33:40.644152    1597 log.go:172] (0xc0008262c0) Data frame received for 3\nI0707 11:33:40.644169    1597 log.go:172] (0xc000510000) (3) Data frame handling\nI0707 11:33:40.644440    1597 log.go:172] (0xc0008262c0) Data frame received for 5\nI0707 11:33:40.644492    1597 log.go:172] (0xc000667400) (5) Data frame handling\nI0707 11:33:40.644530    1597 log.go:172] (0xc000667400) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0707 11:33:40.644553    1597 log.go:172] (0xc0008262c0) Data frame received for 5\nI0707 11:33:40.644601    1597 log.go:172] (0xc000667400) (5) Data frame handling\nI0707 11:33:40.646521    1597 log.go:172] (0xc0008262c0) Data frame received for 1\nI0707 11:33:40.646563    1597 log.go:172] (0xc000667360) (1) Data frame handling\nI0707 11:33:40.646592    1597 log.go:172] (0xc000667360) (1) Data frame sent\nI0707 11:33:40.646631    1597 log.go:172] (0xc0008262c0) (0xc000667360) Stream removed, broadcasting: 1\nI0707 11:33:40.646662    1597 log.go:172] (0xc0008262c0) Go away received\nI0707 11:33:40.646904    1597 log.go:172] (0xc0008262c0) (0xc000667360) Stream removed, broadcasting: 1\nI0707 11:33:40.646931    1597 log.go:172] (0xc0008262c0) (0xc000510000) Stream removed, broadcasting: 3\nI0707 11:33:40.646946    1597 log.go:172] (0xc0008262c0) (0xc000667400) Stream removed, broadcasting: 5\n"
Jul  7 11:33:40.651: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  7 11:33:40.651: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  7 11:33:40.655: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 11:33:40.655: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 11:33:40.655: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
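The scale from 1 to 3 replicas is driven through the framework's client API, so no kubectl command for it appears in the log. An equivalent manual invocation, followed by a check that all replicas report ready once the index.html files are restored, would be (illustrative sketch using the namespace from this log):

    kubectl --namespace=e2e-tests-statefulset-6ft6f scale statefulset ss --replicas=3
    # readyReplicas should reach 3 after the readiness pages are moved back
    kubectl --namespace=e2e-tests-statefulset-6ft6f get statefulset ss -o jsonpath='{.status.readyReplicas}'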
STEP: Scale down will not halt with an unhealthy stateful pod
Jul  7 11:33:40.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  7 11:33:40.869: INFO: stderr: "I0707 11:33:40.785815    1620 log.go:172] (0xc000728160) (0xc0008066e0) Create stream\nI0707 11:33:40.785865    1620 log.go:172] (0xc000728160) (0xc0008066e0) Stream added, broadcasting: 1\nI0707 11:33:40.788200    1620 log.go:172] (0xc000728160) Reply frame received for 1\nI0707 11:33:40.788263    1620 log.go:172] (0xc000728160) (0xc0003c8c80) Create stream\nI0707 11:33:40.788279    1620 log.go:172] (0xc000728160) (0xc0003c8c80) Stream added, broadcasting: 3\nI0707 11:33:40.788982    1620 log.go:172] (0xc000728160) Reply frame received for 3\nI0707 11:33:40.789021    1620 log.go:172] (0xc000728160) (0xc000806780) Create stream\nI0707 11:33:40.789037    1620 log.go:172] (0xc000728160) (0xc000806780) Stream added, broadcasting: 5\nI0707 11:33:40.790075    1620 log.go:172] (0xc000728160) Reply frame received for 5\nI0707 11:33:40.862436    1620 log.go:172] (0xc000728160) Data frame received for 3\nI0707 11:33:40.862470    1620 log.go:172] (0xc0003c8c80) (3) Data frame handling\nI0707 11:33:40.862482    1620 log.go:172] (0xc0003c8c80) (3) Data frame sent\nI0707 11:33:40.862576    1620 log.go:172] (0xc000728160) Data frame received for 5\nI0707 11:33:40.862657    1620 log.go:172] (0xc000806780) (5) Data frame handling\nI0707 11:33:40.862703    1620 log.go:172] (0xc000728160) Data frame received for 3\nI0707 11:33:40.862729    1620 log.go:172] (0xc0003c8c80) (3) Data frame handling\nI0707 11:33:40.864621    1620 log.go:172] (0xc000728160) Data frame received for 1\nI0707 11:33:40.864655    1620 log.go:172] (0xc0008066e0) (1) Data frame handling\nI0707 11:33:40.864692    1620 log.go:172] (0xc0008066e0) (1) Data frame sent\nI0707 11:33:40.864722    1620 log.go:172] (0xc000728160) (0xc0008066e0) Stream removed, broadcasting: 1\nI0707 11:33:40.864739    1620 log.go:172] (0xc000728160) Go away received\nI0707 11:33:40.865020    1620 log.go:172] (0xc000728160) (0xc0008066e0) Stream removed, broadcasting: 1\nI0707 11:33:40.865056    1620 log.go:172] (0xc000728160) (0xc0003c8c80) Stream removed, broadcasting: 3\nI0707 11:33:40.865075    1620 log.go:172] (0xc000728160) (0xc000806780) Stream removed, broadcasting: 5\n"
Jul  7 11:33:40.869: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  7 11:33:40.869: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  7 11:33:40.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  7 11:33:41.169: INFO: stderr: "I0707 11:33:41.066673    1642 log.go:172] (0xc00082c2c0) (0xc000728640) Create stream\nI0707 11:33:41.066759    1642 log.go:172] (0xc00082c2c0) (0xc000728640) Stream added, broadcasting: 1\nI0707 11:33:41.069108    1642 log.go:172] (0xc00082c2c0) Reply frame received for 1\nI0707 11:33:41.069313    1642 log.go:172] (0xc00082c2c0) (0xc00061cd20) Create stream\nI0707 11:33:41.069325    1642 log.go:172] (0xc00082c2c0) (0xc00061cd20) Stream added, broadcasting: 3\nI0707 11:33:41.070199    1642 log.go:172] (0xc00082c2c0) Reply frame received for 3\nI0707 11:33:41.070250    1642 log.go:172] (0xc00082c2c0) (0xc0007286e0) Create stream\nI0707 11:33:41.070268    1642 log.go:172] (0xc00082c2c0) (0xc0007286e0) Stream added, broadcasting: 5\nI0707 11:33:41.070901    1642 log.go:172] (0xc00082c2c0) Reply frame received for 5\nI0707 11:33:41.161425    1642 log.go:172] (0xc00082c2c0) Data frame received for 3\nI0707 11:33:41.161555    1642 log.go:172] (0xc00061cd20) (3) Data frame handling\nI0707 11:33:41.161613    1642 log.go:172] (0xc00061cd20) (3) Data frame sent\nI0707 11:33:41.161944    1642 log.go:172] (0xc00082c2c0) Data frame received for 3\nI0707 11:33:41.161989    1642 log.go:172] (0xc00082c2c0) Data frame received for 5\nI0707 11:33:41.162013    1642 log.go:172] (0xc0007286e0) (5) Data frame handling\nI0707 11:33:41.162177    1642 log.go:172] (0xc00061cd20) (3) Data frame handling\nI0707 11:33:41.164443    1642 log.go:172] (0xc00082c2c0) Data frame received for 1\nI0707 11:33:41.164474    1642 log.go:172] (0xc000728640) (1) Data frame handling\nI0707 11:33:41.164493    1642 log.go:172] (0xc000728640) (1) Data frame sent\nI0707 11:33:41.164523    1642 log.go:172] (0xc00082c2c0) (0xc000728640) Stream removed, broadcasting: 1\nI0707 11:33:41.164534    1642 log.go:172] (0xc00082c2c0) Go away received\nI0707 11:33:41.164827    1642 log.go:172] (0xc00082c2c0) (0xc000728640) Stream removed, broadcasting: 1\nI0707 11:33:41.164869    1642 log.go:172] (0xc00082c2c0) (0xc00061cd20) Stream removed, broadcasting: 3\nI0707 11:33:41.164899    1642 log.go:172] (0xc00082c2c0) (0xc0007286e0) Stream removed, broadcasting: 5\n"
Jul  7 11:33:41.169: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  7 11:33:41.169: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  7 11:33:41.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  7 11:33:41.501: INFO: stderr: "I0707 11:33:41.397673    1664 log.go:172] (0xc000154840) (0xc0006712c0) Create stream\nI0707 11:33:41.397749    1664 log.go:172] (0xc000154840) (0xc0006712c0) Stream added, broadcasting: 1\nI0707 11:33:41.400158    1664 log.go:172] (0xc000154840) Reply frame received for 1\nI0707 11:33:41.400192    1664 log.go:172] (0xc000154840) (0xc000690000) Create stream\nI0707 11:33:41.400203    1664 log.go:172] (0xc000154840) (0xc000690000) Stream added, broadcasting: 3\nI0707 11:33:41.400999    1664 log.go:172] (0xc000154840) Reply frame received for 3\nI0707 11:33:41.401037    1664 log.go:172] (0xc000154840) (0xc00078a000) Create stream\nI0707 11:33:41.401049    1664 log.go:172] (0xc000154840) (0xc00078a000) Stream added, broadcasting: 5\nI0707 11:33:41.401911    1664 log.go:172] (0xc000154840) Reply frame received for 5\nI0707 11:33:41.494822    1664 log.go:172] (0xc000154840) Data frame received for 3\nI0707 11:33:41.494896    1664 log.go:172] (0xc000690000) (3) Data frame handling\nI0707 11:33:41.494920    1664 log.go:172] (0xc000690000) (3) Data frame sent\nI0707 11:33:41.494937    1664 log.go:172] (0xc000154840) Data frame received for 3\nI0707 11:33:41.494951    1664 log.go:172] (0xc000690000) (3) Data frame handling\nI0707 11:33:41.494998    1664 log.go:172] (0xc000154840) Data frame received for 5\nI0707 11:33:41.495037    1664 log.go:172] (0xc00078a000) (5) Data frame handling\nI0707 11:33:41.496330    1664 log.go:172] (0xc000154840) Data frame received for 1\nI0707 11:33:41.496352    1664 log.go:172] (0xc0006712c0) (1) Data frame handling\nI0707 11:33:41.496372    1664 log.go:172] (0xc0006712c0) (1) Data frame sent\nI0707 11:33:41.496389    1664 log.go:172] (0xc000154840) (0xc0006712c0) Stream removed, broadcasting: 1\nI0707 11:33:41.496457    1664 log.go:172] (0xc000154840) Go away received\nI0707 11:33:41.496583    1664 log.go:172] (0xc000154840) (0xc0006712c0) Stream removed, broadcasting: 1\nI0707 11:33:41.496608    1664 log.go:172] (0xc000154840) (0xc000690000) Stream removed, broadcasting: 3\nI0707 11:33:41.496621    1664 log.go:172] (0xc000154840) (0xc00078a000) Stream removed, broadcasting: 5\n"
Jul  7 11:33:41.501: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  7 11:33:41.501: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  7 11:33:41.501: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 11:33:41.530: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jul  7 11:33:51.539: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  7 11:33:51.539: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul  7 11:33:51.539: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
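At this point all three pods have had index.html moved aside, so each is still Running but reports Ready=false. A quick way to see the same state by hand (illustrative only):

    # READY column shows 0/1 for ss-0, ss-1 and ss-2 while the probe fails
    kubectl --namespace=e2e-tests-statefulset-6ft6f get pods -o wide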
Jul  7 11:33:51.556: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  7 11:33:51.556: INFO: ss-0  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:09 +0000 UTC  }]
Jul  7 11:33:51.556: INFO: ss-1  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:51.556: INFO: ss-2  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:51.556: INFO: 
Jul  7 11:33:51.556: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  7 11:33:52.579: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  7 11:33:52.579: INFO: ss-0  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:09 +0000 UTC  }]
Jul  7 11:33:52.579: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:52.579: INFO: ss-2  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:52.579: INFO: 
Jul  7 11:33:52.579: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  7 11:33:53.651: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  7 11:33:53.652: INFO: ss-0  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:09 +0000 UTC  }]
Jul  7 11:33:53.652: INFO: ss-1  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:53.652: INFO: ss-2  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:53.652: INFO: 
Jul  7 11:33:53.652: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  7 11:33:54.657: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  7 11:33:54.657: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:54.657: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:54.657: INFO: 
Jul  7 11:33:54.657: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  7 11:33:55.662: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  7 11:33:55.662: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:55.662: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:55.662: INFO: 
Jul  7 11:33:55.662: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  7 11:33:56.667: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  7 11:33:56.667: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:56.667: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:56.667: INFO: 
Jul  7 11:33:56.667: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  7 11:33:57.671: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  7 11:33:57.671: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:57.672: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:57.672: INFO: 
Jul  7 11:33:57.672: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  7 11:33:58.677: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  7 11:33:58.677: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:58.677: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:58.677: INFO: 
Jul  7 11:33:58.677: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  7 11:33:59.682: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  7 11:33:59.682: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:59.682: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:33:59.682: INFO: 
Jul  7 11:33:59.682: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  7 11:34:00.687: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul  7 11:34:00.687: INFO: ss-1  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:34:00.687: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:33:29 +0000 UTC  }]
Jul  7 11:34:00.687: INFO: 
Jul  7 11:34:00.687: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-6ft6f
Jul  7 11:34:01.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:34:01.816: INFO: rc: 1
Jul  7 11:34:01.816: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001db9260 exit status 1   true [0xc0015e6e40 0xc0015e6e58 0xc0015e6e70] [0xc0015e6e40 0xc0015e6e58 0xc0015e6e70] [0xc0015e6e50 0xc0015e6e68] [0x935700 0x935700] 0xc001d0e780 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jul  7 11:34:11.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:34:11.908: INFO: rc: 1
Jul  7 11:34:11.908: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00182d2c0 exit status 1   true [0xc000cde360 0xc000cde380 0xc000cde398] [0xc000cde360 0xc000cde380 0xc000cde398] [0xc000cde370 0xc000cde390] [0x935700 0x935700] 0xc001aa6de0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:34:21.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:34:22.000: INFO: rc: 1
Jul  7 11:34:22.000: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00182d3e0 exit status 1   true [0xc000cde3a0 0xc000cde3b8 0xc000cde3d0] [0xc000cde3a0 0xc000cde3b8 0xc000cde3d0] [0xc000cde3b0 0xc000cde3c8] [0x935700 0x935700] 0xc001aa7080 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:34:32.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:34:32.087: INFO: rc: 1
Jul  7 11:34:32.087: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000d8e120 exit status 1   true [0xc00000e1f8 0xc0004f6c20 0xc0004f6df0] [0xc00000e1f8 0xc0004f6c20 0xc0004f6df0] [0xc0004f6b60 0xc0004f6d28] [0x935700 0x935700] 0xc0025c61e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:34:42.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:34:42.187: INFO: rc: 1
Jul  7 11:34:42.187: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000d8e240 exit status 1   true [0xc0004f6e78 0xc0004f7008 0xc0004f7110] [0xc0004f6e78 0xc0004f7008 0xc0004f7110] [0xc0004f6fd8 0xc0004f70f0] [0x935700 0x935700] 0xc0025c6480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:34:52.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:34:52.275: INFO: rc: 1
Jul  7 11:34:52.275: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000d8e390 exit status 1   true [0xc0004f7130 0xc0004f7270 0xc0004f7300] [0xc0004f7130 0xc0004f7270 0xc0004f7300] [0xc0004f7218 0xc0004f72e8] [0x935700 0x935700] 0xc0025c6720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:35:02.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:35:02.367: INFO: rc: 1
Jul  7 11:35:02.367: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000d8e4e0 exit status 1   true [0xc0004f7330 0xc0004f7440 0xc0004f74b8] [0xc0004f7330 0xc0004f7440 0xc0004f74b8] [0xc0004f7398 0xc0004f7460] [0x935700 0x935700] 0xc0025c69c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:35:12.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:35:12.470: INFO: rc: 1
Jul  7 11:35:12.470: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00170c120 exit status 1   true [0xc0004de088 0xc0004de1c8 0xc0004de208] [0xc0004de088 0xc0004de1c8 0xc0004de208] [0xc0004de1a0 0xc0004de200] [0x935700 0x935700] 0xc0025063c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:35:22.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:35:22.556: INFO: rc: 1
Jul  7 11:35:22.556: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001054120 exit status 1   true [0xc002752000 0xc002752018 0xc002752030] [0xc002752000 0xc002752018 0xc002752030] [0xc002752010 0xc002752028] [0x935700 0x935700] 0xc0021681e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:35:32.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:35:32.708: INFO: rc: 1
Jul  7 11:35:32.708: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00170c270 exit status 1   true [0xc0004de210 0xc0004de258 0xc0004de2b8] [0xc0004de210 0xc0004de258 0xc0004de2b8] [0xc0004de228 0xc0004de2b0] [0x935700 0x935700] 0xc002506660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:35:42.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:35:42.812: INFO: rc: 1
Jul  7 11:35:42.812: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000d8e660 exit status 1   true [0xc0004f7520 0xc0004f7540 0xc0004f7638] [0xc0004f7520 0xc0004f7540 0xc0004f7638] [0xc0004f7538 0xc0004f7630] [0x935700 0x935700] 0xc0025c6c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:35:52.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:35:52.904: INFO: rc: 1
Jul  7 11:35:52.904: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00170c4e0 exit status 1   true [0xc0004de2d8 0xc0004de340 0xc0004de3f0] [0xc0004de2d8 0xc0004de340 0xc0004de3f0] [0xc0004de338 0xc0004de3a0] [0x935700 0x935700] 0xc002506900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:36:02.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:36:03.000: INFO: rc: 1
Jul  7 11:36:03.000: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000d8e7b0 exit status 1   true [0xc0004f7640 0xc0004f76f8 0xc0004f7770] [0xc0004f7640 0xc0004f76f8 0xc0004f7770] [0xc0004f76d8 0xc0004f7738] [0x935700 0x935700] 0xc001b9ea20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:36:13.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:36:13.081: INFO: rc: 1
Jul  7 11:36:13.081: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000d8e8d0 exit status 1   true [0xc0004f77c0 0xc0004f7828 0xc0004f7930] [0xc0004f77c0 0xc0004f7828 0xc0004f7930] [0xc0004f7808 0xc0004f78a8] [0x935700 0x935700] 0xc001b9ecc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:36:23.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:36:23.171: INFO: rc: 1
Jul  7 11:36:23.171: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00170c6c0 exit status 1   true [0xc0004de410 0xc0004de4a8 0xc0004de548] [0xc0004de410 0xc0004de4a8 0xc0004de548] [0xc0004de450 0xc0004de4f0] [0x935700 0x935700] 0xc002506ba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:36:33.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:36:33.261: INFO: rc: 1
Jul  7 11:36:33.261: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001054150 exit status 1   true [0xc00000e100 0xc002752008 0xc002752020] [0xc00000e100 0xc002752008 0xc002752020] [0xc002752000 0xc002752018] [0x935700 0x935700] 0xc0025c61e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:36:43.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:36:43.348: INFO: rc: 1
Jul  7 11:36:43.348: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000d8e150 exit status 1   true [0xc0004f6b60 0xc0004f6d28 0xc0004f6ee8] [0xc0004f6b60 0xc0004f6d28 0xc0004f6ee8] [0xc0004f6ca8 0xc0004f6e78] [0x935700 0x935700] 0xc0021681e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:36:53.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:36:53.429: INFO: rc: 1
Jul  7 11:36:53.429: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000d8e2a0 exit status 1   true [0xc0004f6fd8 0xc0004f70f0 0xc0004f71b8] [0xc0004f6fd8 0xc0004f70f0 0xc0004f71b8] [0xc0004f7010 0xc0004f7130] [0x935700 0x935700] 0xc002168480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:37:03.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:37:03.532: INFO: rc: 1
Jul  7 11:37:03.532: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001054270 exit status 1   true [0xc002752028 0xc002752040 0xc002752058] [0xc002752028 0xc002752040 0xc002752058] [0xc002752038 0xc002752050] [0x935700 0x935700] 0xc0025c6480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:37:13.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:37:13.635: INFO: rc: 1
Jul  7 11:37:13.635: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001054390 exit status 1   true [0xc002752060 0xc002752078 0xc002752090] [0xc002752060 0xc002752078 0xc002752090] [0xc002752070 0xc002752088] [0x935700 0x935700] 0xc0025c6720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:37:23.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:37:23.732: INFO: rc: 1
Jul  7 11:37:23.732: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001054540 exit status 1   true [0xc002752098 0xc0027520b0 0xc0027520c8] [0xc002752098 0xc0027520b0 0xc0027520c8] [0xc0027520a8 0xc0027520c0] [0x935700 0x935700] 0xc0025c69c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:37:33.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:37:33.825: INFO: rc: 1
Jul  7 11:37:33.825: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc000d8e3f0 exit status 1   true [0xc0004f7218 0xc0004f72e8 0xc0004f7388] [0xc0004f7218 0xc0004f72e8 0xc0004f7388] [0xc0004f72c8 0xc0004f7330] [0x935700 0x935700] 0xc002168720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:37:43.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:37:43.927: INFO: rc: 1
Jul  7 11:37:43.927: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0017c6240 exit status 1   true [0xc0004de088 0xc0004de1c8 0xc0004de208] [0xc0004de088 0xc0004de1c8 0xc0004de208] [0xc0004de1a0 0xc0004de200] [0x935700 0x935700] 0xc001b9eae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:37:53.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:37:54.015: INFO: rc: 1
Jul  7 11:37:54.015: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00170c180 exit status 1   true [0xc001940000 0xc001940040 0xc001940058] [0xc001940000 0xc001940040 0xc001940058] [0xc001940028 0xc001940050] [0x935700 0x935700] 0xc0025063c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:38:04.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:38:04.104: INFO: rc: 1
Jul  7 11:38:04.104: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00170c300 exit status 1   true [0xc001940060 0xc001940078 0xc001940090] [0xc001940060 0xc001940078 0xc001940090] [0xc001940070 0xc001940088] [0x935700 0x935700] 0xc002506660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:38:14.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:38:14.194: INFO: rc: 1
Jul  7 11:38:14.194: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00170c600 exit status 1   true [0xc001940098 0xc0019400b0 0xc0019400c8] [0xc001940098 0xc0019400b0 0xc0019400c8] [0xc0019400a8 0xc0019400c0] [0x935700 0x935700] 0xc002506900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:38:24.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:38:24.280: INFO: rc: 1
Jul  7 11:38:24.280: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001054690 exit status 1   true [0xc0027520d0 0xc0027520e8 0xc002752100] [0xc0027520d0 0xc0027520e8 0xc002752100] [0xc0027520e0 0xc0027520f8] [0x935700 0x935700] 0xc0025c6c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:38:34.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:38:34.368: INFO: rc: 1
Jul  7 11:38:34.369: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00170c8a0 exit status 1   true [0xc0019400d8 0xc0019400f0 0xc001940108] [0xc0019400d8 0xc0019400f0 0xc001940108] [0xc0019400e8 0xc001940100] [0x935700 0x935700] 0xc002506ba0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:38:44.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:38:44.689: INFO: rc: 1
Jul  7 11:38:44.689: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00170c120 exit status 1   true [0xc00000e100 0xc0004f6b60 0xc0004f6d28] [0xc00000e100 0xc0004f6b60 0xc0004f6d28] [0xc00016e000 0xc0004f6ca8] [0x935700 0x935700] 0xc0021681e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:38:54.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:38:54.787: INFO: rc: 1
Jul  7 11:38:54.788: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0017c6180 exit status 1   true [0xc001940000 0xc001940040 0xc001940058] [0xc001940000 0xc001940040 0xc001940058] [0xc001940028 0xc001940050] [0x935700 0x935700] 0xc0025063c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Jul  7 11:39:04.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-6ft6f ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 11:39:04.928: INFO: rc: 1
Jul  7 11:39:04.928: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Jul  7 11:39:04.928: INFO: Scaling statefulset ss to 0
Jul  7 11:39:04.939: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul  7 11:39:04.941: INFO: Deleting all statefulset in ns e2e-tests-statefulset-6ft6f
Jul  7 11:39:04.943: INFO: Scaling statefulset ss to 0
Jul  7 11:39:04.951: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 11:39:04.953: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:39:04.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-6ft6f" for this suite.
Jul  7 11:39:11.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:39:11.194: INFO: namespace: e2e-tests-statefulset-6ft6f, resource: bindings, ignored listing per whitelist
Jul  7 11:39:11.208: INFO: namespace e2e-tests-statefulset-6ft6f deletion completed in 6.206329798s

• [SLOW TEST:362.053 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
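The exec retries above keep failing with NotFound because pod ss-1 no longer exists by the time the restore command runs; the suite eventually gives up, scales the StatefulSet to 0 and deletes it. A minimal by-hand version of that teardown with kubectl (the namespace is the one from this run and only exists while the test is alive):

kubectl --kubeconfig=/root/.kube/config -n e2e-tests-statefulset-6ft6f scale statefulset ss --replicas=0
# poll until status.replicas reaches 0, then remove the StatefulSet object itself
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-statefulset-6ft6f get statefulset ss -o jsonpath='{.status.replicas}'
kubectl --kubeconfig=/root/.kube/config -n e2e-tests-statefulset-6ft6f delete statefulset ss
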
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:39:11.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-79e1d851-c046-11ea-9ad7-0242ac11001b
STEP: Creating configMap with name cm-test-opt-upd-79e1d8f2-c046-11ea-9ad7-0242ac11001b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-79e1d851-c046-11ea-9ad7-0242ac11001b
STEP: Updating configmap cm-test-opt-upd-79e1d8f2-c046-11ea-9ad7-0242ac11001b
STEP: Creating configMap with name cm-test-opt-create-79e1d954-c046-11ea-9ad7-0242ac11001b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:40:49.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-d9mkv" for this suite.
Jul  7 11:41:11.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:41:12.011: INFO: namespace: e2e-tests-configmap-d9mkv, resource: bindings, ignored listing per whitelist
Jul  7 11:41:12.067: INFO: namespace e2e-tests-configmap-d9mkv deletion completed in 22.09077414s

• [SLOW TEST:120.858 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
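The steps above create two ConfigMaps, mount them into a running pod as optional volumes, then delete one, update the other and create a third while the pod keeps running. A rough kubectl equivalent, assuming a pod whose configMap volumes reference these names with optional: true (all names here are illustrative, not from this run):

kubectl create configmap cm-test-opt-del --from-literal=data-1=value-1
kubectl create configmap cm-test-opt-upd --from-literal=data-1=value-1
# ... start a pod mounting cm-test-opt-del, cm-test-opt-upd and the not-yet-existing
#     cm-test-opt-create, each as a configMap volume with optional: true ...
kubectl delete configmap cm-test-opt-del
kubectl patch configmap cm-test-opt-upd -p '{"data":{"data-1":"value-2"}}'
kubectl create configmap cm-test-opt-create --from-literal=data-1=value-1

Because the volumes are marked optional, the pod keeps running when a referenced ConfigMap disappears, and the kubelet rewrites the mounted files on its periodic sync, which is what the "waiting to observe update in volume" step is polling for.
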
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:41:12.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  7 11:41:12.232: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1eef5ef-c046-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-lwkc8" to be "success or failure"
Jul  7 11:41:12.245: INFO: Pod "downwardapi-volume-c1eef5ef-c046-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.176845ms
Jul  7 11:41:14.249: INFO: Pod "downwardapi-volume-c1eef5ef-c046-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017035229s
Jul  7 11:41:16.254: INFO: Pod "downwardapi-volume-c1eef5ef-c046-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021709463s
STEP: Saw pod success
Jul  7 11:41:16.254: INFO: Pod "downwardapi-volume-c1eef5ef-c046-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:41:16.257: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-c1eef5ef-c046-11ea-9ad7-0242ac11001b container client-container: 
STEP: delete the pod
Jul  7 11:41:16.283: INFO: Waiting for pod downwardapi-volume-c1eef5ef-c046-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:41:16.397: INFO: Pod downwardapi-volume-c1eef5ef-c046-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:41:16.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lwkc8" for this suite.
Jul  7 11:41:22.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:41:22.495: INFO: namespace: e2e-tests-projected-lwkc8, resource: bindings, ignored listing per whitelist
Jul  7 11:41:22.523: INFO: namespace e2e-tests-projected-lwkc8 deletion completed in 6.121022965s

• [SLOW TEST:10.456 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
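The container under test has no memory limit set, so the limits.memory resourceFieldRef in its projected downward API volume falls back to the node's allocatable memory, which is what the test compares against. That reference value can be read straight off the node object; hunter-worker is the node the pod ran on in this instance:

kubectl get node hunter-worker -o jsonpath='{.status.allocatable.memory}'
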
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:41:22.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  7 11:41:22.665: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c828f7d6-c046-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-9x5rc" to be "success or failure"
Jul  7 11:41:22.671: INFO: Pod "downwardapi-volume-c828f7d6-c046-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.546054ms
Jul  7 11:41:24.738: INFO: Pod "downwardapi-volume-c828f7d6-c046-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072432683s
Jul  7 11:41:26.742: INFO: Pod "downwardapi-volume-c828f7d6-c046-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076830727s
STEP: Saw pod success
Jul  7 11:41:26.742: INFO: Pod "downwardapi-volume-c828f7d6-c046-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:41:26.745: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c828f7d6-c046-11ea-9ad7-0242ac11001b container client-container: 
STEP: delete the pod
Jul  7 11:41:26.786: INFO: Waiting for pod downwardapi-volume-c828f7d6-c046-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:41:26.796: INFO: Pod downwardapi-volume-c828f7d6-c046-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:41:26.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9x5rc" for this suite.
Jul  7 11:41:32.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:41:32.868: INFO: namespace: e2e-tests-downward-api-9x5rc, resource: bindings, ignored listing per whitelist
Jul  7 11:41:32.877: INFO: namespace e2e-tests-downward-api-9x5rc deletion completed in 6.076792873s

• [SLOW TEST:10.354 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
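A self-contained sketch of the API this spec exercises: a downwardAPI volume item whose resourceFieldRef points at the container's own requests.memory, scaled by a divisor. Names and values below are illustrative, not taken from the run:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
          divisor: 1Mi
EOF
# once the container has run, the pod log contains the request expressed in 1Mi units: 32
kubectl logs downwardapi-memory-request-demo
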
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:41:32.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0707 11:41:44.179423       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  7 11:41:44.179: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:41:44.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ssm2p" for this suite.
Jul  7 11:41:56.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:41:56.846: INFO: namespace: e2e-tests-gc-ssm2p, resource: bindings, ignored listing per whitelist
Jul  7 11:41:56.874: INFO: namespace e2e-tests-gc-ssm2p deletion completed in 12.69177997s

• [SLOW TEST:23.996 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
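Half of the pods created by simpletest-rc-to-be-deleted are given simpletest-rc-to-stay as a second owner, so deleting the first RC must not remove them: the garbage collector only collects an object once every ownerReference on it is gone. The surviving owners are easy to inspect by hand after the delete:

kubectl delete rc simpletest-rc-to-be-deleted
# pods that also listed simpletest-rc-to-stay survive; print each pod with its remaining owners
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[*].name}{"\n"}{end}'
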
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:41:56.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-dca9cae2-c046-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume configMaps
Jul  7 11:41:57.093: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dcaa4c6d-c046-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-8s2gm" to be "success or failure"
Jul  7 11:41:57.133: INFO: Pod "pod-projected-configmaps-dcaa4c6d-c046-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 40.012633ms
Jul  7 11:41:59.137: INFO: Pod "pod-projected-configmaps-dcaa4c6d-c046-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044358582s
Jul  7 11:42:01.141: INFO: Pod "pod-projected-configmaps-dcaa4c6d-c046-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048427182s
Jul  7 11:42:03.146: INFO: Pod "pod-projected-configmaps-dcaa4c6d-c046-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052946252s
STEP: Saw pod success
Jul  7 11:42:03.146: INFO: Pod "pod-projected-configmaps-dcaa4c6d-c046-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:42:03.149: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-dcaa4c6d-c046-11ea-9ad7-0242ac11001b container projected-configmap-volume-test: 
STEP: delete the pod
Jul  7 11:42:03.205: INFO: Waiting for pod pod-projected-configmaps-dcaa4c6d-c046-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:42:03.228: INFO: Pod pod-projected-configmaps-dcaa4c6d-c046-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:42:03.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8s2gm" for this suite.
Jul  7 11:42:09.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:42:09.349: INFO: namespace: e2e-tests-projected-8s2gm, resource: bindings, ignored listing per whitelist
Jul  7 11:42:09.367: INFO: namespace e2e-tests-projected-8s2gm deletion completed in 6.134722382s

• [SLOW TEST:12.493 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
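A minimal, hand-runnable version of the projected-ConfigMap-volume pattern this spec consumes (all names illustrative):

kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
EOF
kubectl logs projected-cm-demo-pod   # prints value-1 once the container has run
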
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:42:09.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jul  7 11:42:09.655: INFO: Waiting up to 5m0s for pod "var-expansion-e42b39c5-c046-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-var-expansion-fh6nt" to be "success or failure"
Jul  7 11:42:09.690: INFO: Pod "var-expansion-e42b39c5-c046-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 35.476118ms
Jul  7 11:42:11.695: INFO: Pod "var-expansion-e42b39c5-c046-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04004945s
Jul  7 11:42:13.762: INFO: Pod "var-expansion-e42b39c5-c046-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107507596s
STEP: Saw pod success
Jul  7 11:42:13.762: INFO: Pod "var-expansion-e42b39c5-c046-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:42:13.765: INFO: Trying to get logs from node hunter-worker pod var-expansion-e42b39c5-c046-11ea-9ad7-0242ac11001b container dapi-container: 
STEP: delete the pod
Jul  7 11:42:13.830: INFO: Waiting for pod var-expansion-e42b39c5-c046-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:42:13.960: INFO: Pod var-expansion-e42b39c5-c046-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:42:13.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-fh6nt" for this suite.
Jul  7 11:42:20.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:42:20.320: INFO: namespace: e2e-tests-var-expansion-fh6nt, resource: bindings, ignored listing per whitelist
Jul  7 11:42:20.324: INFO: namespace e2e-tests-var-expansion-fh6nt deletion completed in 6.360158575s

• [SLOW TEST:10.957 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
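The composition being tested is Kubernetes' $(VAR) expansion inside env values: a value may reference variables defined earlier in the same env list, and the references are resolved before the container starts. A small sketch with illustrative names:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"
EOF
kubectl logs var-expansion-demo   # foo-value;;bar-value
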
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:42:20.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0707 11:42:21.526331       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  7 11:42:21.526: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:42:21.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-n8fw6" for this suite.
Jul  7 11:42:27.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:42:27.622: INFO: namespace: e2e-tests-gc-n8fw6, resource: bindings, ignored listing per whitelist
Jul  7 11:42:27.680: INFO: namespace e2e-tests-gc-n8fw6 deletion completed in 6.150535384s

• [SLOW TEST:7.356 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
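The "expected 0 rs, got 1 rs" lines are just the test polling while the cascade catches up: deleting a Deployment without orphaning also deletes the ReplicaSet it owns, which in turn takes its pods. By hand (names illustrative; the image is the one used throughout this run):

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine
kubectl get rs           # one ReplicaSet owned by the Deployment appears
kubectl delete deployment gc-demo
kubectl get rs,pods      # drains to nothing once the garbage collector processes the owner deletion
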
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:42:27.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul  7 11:42:37.870: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  7 11:42:38.008: INFO: Pod pod-with-poststart-http-hook still exists
Jul  7 11:42:40.009: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  7 11:42:40.013: INFO: Pod pod-with-poststart-http-hook still exists
Jul  7 11:42:42.009: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  7 11:42:42.013: INFO: Pod pod-with-poststart-http-hook still exists
Jul  7 11:42:44.009: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  7 11:42:44.013: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:42:44.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4q7ck" for this suite.
Jul  7 11:43:08.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:43:08.128: INFO: namespace: e2e-tests-container-lifecycle-hook-4q7ck, resource: bindings, ignored listing per whitelist
Jul  7 11:43:08.184: INFO: namespace e2e-tests-container-lifecycle-hook-4q7ck deletion completed in 24.165913948s

• [SLOW TEST:40.503 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
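The suite first starts a separate pod to serve the hook request (the "create the container to handle the HTTPGet hook request" step), then creates this pod with a postStart httpGet hook aimed at it; the container is not considered started until the hook call returns. A sketch of the hook wiring, with the handler address, port and path as pure placeholders:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook-demo
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        httpGet:
          host: 10.0.0.10       # placeholder: IP of the hook-handler pod, not a value from this run
          port: 8080            # placeholder port
          path: /poststart
EOF
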
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:43:08.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  7 11:43:08.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-xrxmr'
Jul  7 11:43:10.841: INFO: stderr: ""
Jul  7 11:43:10.841: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jul  7 11:43:10.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-xrxmr'
Jul  7 11:43:23.769: INFO: stderr: ""
Jul  7 11:43:23.769: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:43:23.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xrxmr" for this suite.
Jul  7 11:43:29.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:43:29.854: INFO: namespace: e2e-tests-kubectl-xrxmr, resource: bindings, ignored listing per whitelist
Jul  7 11:43:29.878: INFO: namespace e2e-tests-kubectl-xrxmr deletion completed in 6.088950195s

• [SLOW TEST:21.694 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
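Condensed, the flow driven above creates a bare pod (no owning controller), verifies it, and removes it:

kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl get pod e2e-test-nginx-pod -o jsonpath='{.status.phase}'
kubectl delete pod e2e-test-nginx-pod
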
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:43:29.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-14083f84-c047-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume secrets
Jul  7 11:43:30.338: INFO: Waiting up to 5m0s for pod "pod-secrets-14426233-c047-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-secrets-dmpb5" to be "success or failure"
Jul  7 11:43:31.033: INFO: Pod "pod-secrets-14426233-c047-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 694.694613ms
Jul  7 11:43:33.036: INFO: Pod "pod-secrets-14426233-c047-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.697907107s
Jul  7 11:43:35.056: INFO: Pod "pod-secrets-14426233-c047-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.718176622s
STEP: Saw pod success
Jul  7 11:43:35.056: INFO: Pod "pod-secrets-14426233-c047-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:43:35.059: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-14426233-c047-11ea-9ad7-0242ac11001b container secret-env-test: 
STEP: delete the pod
Jul  7 11:43:35.099: INFO: Waiting for pod pod-secrets-14426233-c047-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:43:35.104: INFO: Pod pod-secrets-14426233-c047-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:43:35.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dmpb5" for this suite.
Jul  7 11:43:41.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:43:41.136: INFO: namespace: e2e-tests-secrets-dmpb5, resource: bindings, ignored listing per whitelist
Jul  7 11:43:41.233: INFO: namespace e2e-tests-secrets-dmpb5 deletion completed in 6.125008401s

• [SLOW TEST:11.355 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
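The same consumption pattern by hand: a Secret key surfaced as an environment variable through valueFrom.secretKeyRef (names illustrative):

kubectl create secret generic secret-env-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1
EOF
kubectl logs pod-secrets-env-demo   # value-1
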
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:43:41.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul  7 11:43:45.973: INFO: Successfully updated pod "annotationupdate1ad6da34-c047-11ea-9ad7-0242ac11001b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:43:48.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4kgmh" for this suite.
Jul  7 11:44:10.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:44:10.076: INFO: namespace: e2e-tests-downward-api-4kgmh, resource: bindings, ignored listing per whitelist
Jul  7 11:44:10.113: INFO: namespace e2e-tests-downward-api-4kgmh deletion completed in 22.099406757s

• [SLOW TEST:28.880 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
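The pod in this test mounts a downwardAPI volume item with fieldRef metadata.annotations; "Successfully updated pod" is the suite patching the pod's annotations, after which the kubelet rewrites the mounted file on its next sync. Done by hand it looks like this (pod name and mount path are illustrative):

kubectl annotate pod annotationupdate-demo builder=e2e --overwrite
kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations
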
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:44:10.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-xbsjd.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xbsjd.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-xbsjd.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-xbsjd.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xbsjd.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-xbsjd.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  7 11:44:16.344: INFO: DNS probes using e2e-tests-dns-xbsjd/dns-test-2c0c932a-c047-11ea-9ad7-0242ac11001b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:44:16.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-xbsjd" for this suite.
Jul  7 11:44:22.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:44:22.411: INFO: namespace: e2e-tests-dns-xbsjd, resource: bindings, ignored listing per whitelist
Jul  7 11:44:22.479: INFO: namespace e2e-tests-dns-xbsjd deletion completed in 6.090184476s

• [SLOW TEST:12.366 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
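Each wheezy/jessie probe loops over dig lookups of the kubernetes.default service name at increasing levels of qualification, relying on the pod's DNS search domains, and writes an OK marker into /results for every name that resolves over both UDP and TCP. Any pod with dig installed can reproduce a single check (dns-probe is an illustrative pod name):

kubectl exec dns-probe -- dig +notcp +noall +answer +search kubernetes.default A
kubectl exec dns-probe -- dig +tcp +noall +answer +search kubernetes.default A
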
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:44:22.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jul  7 11:44:26.636: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-3364fd17-c047-11ea-9ad7-0242ac11001b", GenerateName:"", Namespace:"e2e-tests-pods-bcmwv", SelfLink:"/api/v1/namespaces/e2e-tests-pods-bcmwv/pods/pod-submit-remove-3364fd17-c047-11ea-9ad7-0242ac11001b", UID:"3368efd8-c047-11ea-a300-0242ac110004", ResourceVersion:"599717", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729719062, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"568895606"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6fm6r", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0015a1280), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6fm6r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f4a9f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001f6ba40), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f4aec0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f4b1d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001f4b1d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f4b1dc)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729719062, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729719065, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729719065, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729719062, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.3", PodIP:"10.244.2.154", StartTime:(*v1.Time)(0xc000973580), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0009735a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://fd2ea0f43646eaeab0241c6a7a1dd86772d3d4a755f162f8fd51e6c3f0ea739c"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:44:33.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bcmwv" for this suite.
Jul  7 11:44:39.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:44:39.863: INFO: namespace: e2e-tests-pods-bcmwv, resource: bindings, ignored listing per whitelist
Jul  7 11:44:39.877: INFO: namespace e2e-tests-pods-bcmwv deletion completed in 6.109549129s

• [SLOW TEST:17.398 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
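The interesting part here is the watch: after a graceful delete the pod first gets a deletionTimestamp and keeps showing as Terminating, and it only disappears from the API once the kubelet confirms the containers have stopped. The same sequence can be watched by hand (pod name illustrative; name=foo is the label the test pod carries above):

kubectl get pods -l name=foo -w &
kubectl delete pod pod-submit-remove-demo --grace-period=30
# the watch shows the pod go Terminating, then drop out once termination is confirmed
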
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:44:39.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-3dc8644c-c047-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume configMaps
Jul  7 11:44:40.023: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3dca7efd-c047-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-cfzbb" to be "success or failure"
Jul  7 11:44:40.028: INFO: Pod "pod-projected-configmaps-3dca7efd-c047-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.708365ms
Jul  7 11:44:42.034: INFO: Pod "pod-projected-configmaps-3dca7efd-c047-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011805166s
Jul  7 11:44:44.081: INFO: Pod "pod-projected-configmaps-3dca7efd-c047-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058666007s
STEP: Saw pod success
Jul  7 11:44:44.081: INFO: Pod "pod-projected-configmaps-3dca7efd-c047-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:44:44.084: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-3dca7efd-c047-11ea-9ad7-0242ac11001b container projected-configmap-volume-test: 
STEP: delete the pod
Jul  7 11:44:44.108: INFO: Waiting for pod pod-projected-configmaps-3dca7efd-c047-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:44:44.132: INFO: Pod pod-projected-configmaps-3dca7efd-c047-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:44:44.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cfzbb" for this suite.
Jul  7 11:44:50.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:44:50.186: INFO: namespace: e2e-tests-projected-cfzbb, resource: bindings, ignored listing per whitelist
Jul  7 11:44:50.248: INFO: namespace e2e-tests-projected-cfzbb deletion completed in 6.113517161s

• [SLOW TEST:10.371 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
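Same projection as before, but one ConfigMap is surfaced through two separate volumes mounted at two different paths of the same container (all names illustrative):

kubectl create configmap multi-volume-cm-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: multi-volume-cm-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"]
    volumeMounts:
    - { name: cfg-1, mountPath: /etc/cm-1 }
    - { name: cfg-2, mountPath: /etc/cm-2 }
  volumes:
  - name: cfg-1
    projected:
      sources:
      - configMap: { name: multi-volume-cm-demo }
  - name: cfg-2
    projected:
      sources:
      - configMap: { name: multi-volume-cm-demo }
EOF
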
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:44:50.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-43f51840-c047-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume configMaps
Jul  7 11:44:50.431: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-43f64abd-c047-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-r9wkm" to be "success or failure"
Jul  7 11:44:50.479: INFO: Pod "pod-projected-configmaps-43f64abd-c047-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 47.717948ms
Jul  7 11:44:52.483: INFO: Pod "pod-projected-configmaps-43f64abd-c047-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051906388s
Jul  7 11:44:54.488: INFO: Pod "pod-projected-configmaps-43f64abd-c047-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056605157s
STEP: Saw pod success
Jul  7 11:44:54.488: INFO: Pod "pod-projected-configmaps-43f64abd-c047-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:44:54.491: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-43f64abd-c047-11ea-9ad7-0242ac11001b container projected-configmap-volume-test: 
STEP: delete the pod
Jul  7 11:44:54.509: INFO: Waiting for pod pod-projected-configmaps-43f64abd-c047-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:44:54.530: INFO: Pod pod-projected-configmaps-43f64abd-c047-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:44:54.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-r9wkm" for this suite.
Jul  7 11:45:00.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:45:00.626: INFO: namespace: e2e-tests-projected-r9wkm, resource: bindings, ignored listing per whitelist
Jul  7 11:45:00.626: INFO: namespace e2e-tests-projected-r9wkm deletion completed in 6.091764428s

• [SLOW TEST:10.377 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
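Here each projected item additionally carries a mode, so the file is created with specific permission bits (0400 is parsed as octal, i.e. owner read-only), and path places it at a nested location inside the mount. A compact illustrative manifest:

kubectl create configmap mode-demo-cm --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: mode-demo-cm
          items:
          - key: data-1
            path: path/to/data-1
            mode: 0400
EOF
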
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:45:00.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jul  7 11:45:00.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xdkfc'
Jul  7 11:45:01.030: INFO: stderr: ""
Jul  7 11:45:01.030: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  7 11:45:01.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xdkfc'
Jul  7 11:45:01.199: INFO: stderr: ""
Jul  7 11:45:01.199: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Jul  7 11:45:06.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-xdkfc'
Jul  7 11:45:06.315: INFO: stderr: ""
Jul  7 11:45:06.316: INFO: stdout: "update-demo-nautilus-7grhd update-demo-nautilus-qqrd7 "
Jul  7 11:45:06.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7grhd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xdkfc'
Jul  7 11:45:06.414: INFO: stderr: ""
Jul  7 11:45:06.414: INFO: stdout: "true"
Jul  7 11:45:06.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7grhd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xdkfc'
Jul  7 11:45:06.512: INFO: stderr: ""
Jul  7 11:45:06.512: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 11:45:06.512: INFO: validating pod update-demo-nautilus-7grhd
Jul  7 11:45:06.516: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 11:45:06.516: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  7 11:45:06.516: INFO: update-demo-nautilus-7grhd is verified up and running
Jul  7 11:45:06.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qqrd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xdkfc'
Jul  7 11:45:06.617: INFO: stderr: ""
Jul  7 11:45:06.617: INFO: stdout: "true"
Jul  7 11:45:06.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qqrd7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-xdkfc'
Jul  7 11:45:06.711: INFO: stderr: ""
Jul  7 11:45:06.711: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 11:45:06.711: INFO: validating pod update-demo-nautilus-qqrd7
Jul  7 11:45:06.715: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 11:45:06.715: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  7 11:45:06.715: INFO: update-demo-nautilus-qqrd7 is verified up and running
STEP: using delete to clean up resources
Jul  7 11:45:06.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-xdkfc'
Jul  7 11:45:06.829: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 11:45:06.829: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul  7 11:45:06.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-xdkfc'
Jul  7 11:45:06.945: INFO: stderr: "No resources found.\n"
Jul  7 11:45:06.946: INFO: stdout: ""
Jul  7 11:45:06.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-xdkfc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  7 11:45:07.044: INFO: stderr: ""
Jul  7 11:45:07.044: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:45:07.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xdkfc" for this suite.
Jul  7 11:45:13.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:45:13.172: INFO: namespace: e2e-tests-kubectl-xdkfc, resource: bindings, ignored listing per whitelist
Jul  7 11:45:13.172: INFO: namespace e2e-tests-kubectl-xdkfc deletion completed in 6.123786267s

• [SLOW TEST:12.545 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
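
The per-pod verification in the Update Demo test above is ordinary kubectl templating and can be run by hand. The command below is copied from this run; the pod name and namespace are specific to this execution and would differ on another cluster.

# Prints "true" if the update-demo container in the given pod is running.
kubectl get pods update-demo-nautilus-7grhd \
  --namespace=e2e-tests-kubectl-xdkfc \
  -o template \
  --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
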
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:45:13.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-mwv59
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  7 11:45:13.304: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  7 11:45:35.410: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.157 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-mwv59 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 11:45:35.410: INFO: >>> kubeConfig: /root/.kube/config
I0707 11:45:35.446027       6 log.go:172] (0xc000a278c0) (0xc00174a5a0) Create stream
I0707 11:45:35.446065       6 log.go:172] (0xc000a278c0) (0xc00174a5a0) Stream added, broadcasting: 1
I0707 11:45:35.448548       6 log.go:172] (0xc000a278c0) Reply frame received for 1
I0707 11:45:35.448595       6 log.go:172] (0xc000a278c0) (0xc001789900) Create stream
I0707 11:45:35.448609       6 log.go:172] (0xc000a278c0) (0xc001789900) Stream added, broadcasting: 3
I0707 11:45:35.449807       6 log.go:172] (0xc000a278c0) Reply frame received for 3
I0707 11:45:35.449845       6 log.go:172] (0xc000a278c0) (0xc0017899a0) Create stream
I0707 11:45:35.449858       6 log.go:172] (0xc000a278c0) (0xc0017899a0) Stream added, broadcasting: 5
I0707 11:45:35.450947       6 log.go:172] (0xc000a278c0) Reply frame received for 5
I0707 11:45:36.521024       6 log.go:172] (0xc000a278c0) Data frame received for 3
I0707 11:45:36.521101       6 log.go:172] (0xc001789900) (3) Data frame handling
I0707 11:45:36.521293       6 log.go:172] (0xc001789900) (3) Data frame sent
I0707 11:45:36.521315       6 log.go:172] (0xc000a278c0) Data frame received for 3
I0707 11:45:36.521331       6 log.go:172] (0xc001789900) (3) Data frame handling
I0707 11:45:36.521350       6 log.go:172] (0xc000a278c0) Data frame received for 5
I0707 11:45:36.521362       6 log.go:172] (0xc0017899a0) (5) Data frame handling
I0707 11:45:36.523148       6 log.go:172] (0xc000a278c0) Data frame received for 1
I0707 11:45:36.523186       6 log.go:172] (0xc00174a5a0) (1) Data frame handling
I0707 11:45:36.523210       6 log.go:172] (0xc00174a5a0) (1) Data frame sent
I0707 11:45:36.523226       6 log.go:172] (0xc000a278c0) (0xc00174a5a0) Stream removed, broadcasting: 1
I0707 11:45:36.523245       6 log.go:172] (0xc000a278c0) Go away received
I0707 11:45:36.523382       6 log.go:172] (0xc000a278c0) (0xc00174a5a0) Stream removed, broadcasting: 1
I0707 11:45:36.523404       6 log.go:172] (0xc000a278c0) (0xc001789900) Stream removed, broadcasting: 3
I0707 11:45:36.523419       6 log.go:172] (0xc000a278c0) (0xc0017899a0) Stream removed, broadcasting: 5
Jul  7 11:45:36.523: INFO: Found all expected endpoints: [netserver-0]
Jul  7 11:45:36.526: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.54 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-mwv59 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 11:45:36.526: INFO: >>> kubeConfig: /root/.kube/config
I0707 11:45:36.560445       6 log.go:172] (0xc000a27d90) (0xc00174a8c0) Create stream
I0707 11:45:36.560467       6 log.go:172] (0xc000a27d90) (0xc00174a8c0) Stream added, broadcasting: 1
I0707 11:45:36.563046       6 log.go:172] (0xc000a27d90) Reply frame received for 1
I0707 11:45:36.563074       6 log.go:172] (0xc000a27d90) (0xc001789a40) Create stream
I0707 11:45:36.563088       6 log.go:172] (0xc000a27d90) (0xc001789a40) Stream added, broadcasting: 3
I0707 11:45:36.563930       6 log.go:172] (0xc000a27d90) Reply frame received for 3
I0707 11:45:36.563960       6 log.go:172] (0xc000a27d90) (0xc0020d6640) Create stream
I0707 11:45:36.563970       6 log.go:172] (0xc000a27d90) (0xc0020d6640) Stream added, broadcasting: 5
I0707 11:45:36.564646       6 log.go:172] (0xc000a27d90) Reply frame received for 5
I0707 11:45:37.621754       6 log.go:172] (0xc000a27d90) Data frame received for 3
I0707 11:45:37.621786       6 log.go:172] (0xc001789a40) (3) Data frame handling
I0707 11:45:37.621810       6 log.go:172] (0xc001789a40) (3) Data frame sent
I0707 11:45:37.622285       6 log.go:172] (0xc000a27d90) Data frame received for 5
I0707 11:45:37.622317       6 log.go:172] (0xc0020d6640) (5) Data frame handling
I0707 11:45:37.622659       6 log.go:172] (0xc000a27d90) Data frame received for 3
I0707 11:45:37.622695       6 log.go:172] (0xc001789a40) (3) Data frame handling
I0707 11:45:37.624624       6 log.go:172] (0xc000a27d90) Data frame received for 1
I0707 11:45:37.624648       6 log.go:172] (0xc00174a8c0) (1) Data frame handling
I0707 11:45:37.624666       6 log.go:172] (0xc00174a8c0) (1) Data frame sent
I0707 11:45:37.624680       6 log.go:172] (0xc000a27d90) (0xc00174a8c0) Stream removed, broadcasting: 1
I0707 11:45:37.624705       6 log.go:172] (0xc000a27d90) Go away received
I0707 11:45:37.624850       6 log.go:172] (0xc000a27d90) (0xc00174a8c0) Stream removed, broadcasting: 1
I0707 11:45:37.624883       6 log.go:172] (0xc000a27d90) (0xc001789a40) Stream removed, broadcasting: 3
I0707 11:45:37.624913       6 log.go:172] (0xc000a27d90) (0xc0020d6640) Stream removed, broadcasting: 5
Jul  7 11:45:37.624: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:45:37.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-mwv59" for this suite.
Jul  7 11:46:01.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:46:01.700: INFO: namespace: e2e-tests-pod-network-test-mwv59, resource: bindings, ignored listing per whitelist
Jul  7 11:46:01.719: INFO: namespace e2e-tests-pod-network-test-mwv59 deletion completed in 24.08937372s

• [SLOW TEST:48.547 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
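
The actual node-to-pod UDP check above is a one-line shell probe executed inside the hostexec pod: it sends the string 'hostName' over UDP to each netserver pod and filters out empty replies. The command below is taken from this run; the pod IP and port are specific to this cluster.

# Send "hostName" over UDP to a netserver pod and keep only non-empty reply lines.
echo 'hostName' | nc -w 1 -u 10.244.2.157 8081 | grep -v '^\s*$'
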
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:46:01.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  7 11:46:01.831: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e8c66d7-c047-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-dg4nk" to be "success or failure"
Jul  7 11:46:01.838: INFO: Pod "downwardapi-volume-6e8c66d7-c047-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.47362ms
Jul  7 11:46:04.004: INFO: Pod "downwardapi-volume-6e8c66d7-c047-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173070033s
Jul  7 11:46:06.009: INFO: Pod "downwardapi-volume-6e8c66d7-c047-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.177974019s
STEP: Saw pod success
Jul  7 11:46:06.009: INFO: Pod "downwardapi-volume-6e8c66d7-c047-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:46:06.011: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-6e8c66d7-c047-11ea-9ad7-0242ac11001b container client-container: 
STEP: delete the pod
Jul  7 11:46:06.269: INFO: Waiting for pod downwardapi-volume-6e8c66d7-c047-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:46:06.300: INFO: Pod downwardapi-volume-6e8c66d7-c047-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:46:06.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dg4nk" for this suite.
Jul  7 11:46:12.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:46:12.369: INFO: namespace: e2e-tests-downward-api-dg4nk, resource: bindings, ignored listing per whitelist
Jul  7 11:46:12.407: INFO: namespace e2e-tests-downward-api-dg4nk deletion completed in 6.10339011s

• [SLOW TEST:10.687 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
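
A minimal pod exercising this behaviour mounts a downwardAPI volume with a resourceFieldRef for limits.memory while setting no memory limit on the container, so the kubelet reports the node's allocatable memory instead. This is only a sketch under that assumption; names and image are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-example    # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/memory_limit"]
    # No resources.limits.memory here, so the default (node allocatable) is exposed.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory     # falls back to node allocatable when unset
  restartPolicy: Never
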
S
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:46:12.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:47:12.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-zbssm" for this suite.
Jul  7 11:47:36.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:47:36.797: INFO: namespace: e2e-tests-container-probe-zbssm, resource: bindings, ignored listing per whitelist
Jul  7 11:47:36.850: INFO: namespace e2e-tests-container-probe-zbssm deletion completed in 24.308843528s

• [SLOW TEST:84.444 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
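
The probe under test is one that can never succeed, so the pod stays Running but never becomes Ready, and because readiness failures (unlike liveness failures) do not trigger restarts, the restart count stays at zero. A minimal sketch of such a pod, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready         # illustrative name
spec:
  containers:
  - name: probe-test
    image: busybox
    command: ["sleep", "3600"]        # keep the container alive
    readinessProbe:
      exec:
        command: ["/bin/false"]       # always fails, so Ready stays false
      initialDelaySeconds: 5
      periodSeconds: 5
    # No livenessProbe, so the kubelet never restarts the container.
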
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:47:36.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul  7 11:47:47.024: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:47:47.048: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:47:49.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:47:49.052: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:47:51.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:47:51.052: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:47:53.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:47:53.052: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:47:55.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:47:55.052: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:47:57.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:47:57.052: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:47:59.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:47:59.052: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:48:01.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:48:01.052: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:48:03.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:48:03.052: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:48:05.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:48:05.052: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:48:07.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:48:07.052: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:48:09.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:48:09.052: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:48:11.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:48:11.052: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:48:13.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:48:13.052: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  7 11:48:15.048: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  7 11:48:15.051: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:48:15.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-8s67m" for this suite.
Jul  7 11:48:37.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:48:37.138: INFO: namespace: e2e-tests-container-lifecycle-hook-8s67m, resource: bindings, ignored listing per whitelist
Jul  7 11:48:37.157: INFO: namespace e2e-tests-container-lifecycle-hook-8s67m deletion completed in 22.095899718s

• [SLOW TEST:60.307 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
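
The pod being deleted above carries a preStop exec hook: on deletion, the kubelet runs the hook command inside the container before stopping it, which is why the pod lingers across several poll intervals before disappearing. A hedged sketch of such a spec follows; the hook command is illustrative, not the one the e2e test uses (which reports back to a separate handler pod).

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook    # same name as the pod polled above
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container before it is stopped.
          command: ["/bin/sh", "-c", "echo prestop-hook-fired"]
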
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:48:37.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  7 11:48:37.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-265n9'
Jul  7 11:48:37.384: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  7 11:48:37.384: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jul  7 11:48:37.409: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jul  7 11:48:37.443: INFO: scanned /root for discovery docs: 
Jul  7 11:48:37.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-265n9'
Jul  7 11:48:53.311: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul  7 11:48:53.311: INFO: stdout: "Created e2e-test-nginx-rc-238336f8402d189a659a9b32ef96dc14\nScaling up e2e-test-nginx-rc-238336f8402d189a659a9b32ef96dc14 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-238336f8402d189a659a9b32ef96dc14 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-238336f8402d189a659a9b32ef96dc14 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jul  7 11:48:53.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-265n9'
Jul  7 11:48:53.412: INFO: stderr: ""
Jul  7 11:48:53.412: INFO: stdout: "e2e-test-nginx-rc-238336f8402d189a659a9b32ef96dc14-4xdq8 e2e-test-nginx-rc-r52ps "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jul  7 11:48:58.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-265n9'
Jul  7 11:48:58.542: INFO: stderr: ""
Jul  7 11:48:58.542: INFO: stdout: "e2e-test-nginx-rc-238336f8402d189a659a9b32ef96dc14-4xdq8 "
Jul  7 11:48:58.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-238336f8402d189a659a9b32ef96dc14-4xdq8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-265n9'
Jul  7 11:48:58.670: INFO: stderr: ""
Jul  7 11:48:58.670: INFO: stdout: "true"
Jul  7 11:48:58.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-238336f8402d189a659a9b32ef96dc14-4xdq8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-265n9'
Jul  7 11:48:58.775: INFO: stderr: ""
Jul  7 11:48:58.775: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jul  7 11:48:58.775: INFO: e2e-test-nginx-rc-238336f8402d189a659a9b32ef96dc14-4xdq8 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jul  7 11:48:58.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-265n9'
Jul  7 11:48:58.894: INFO: stderr: ""
Jul  7 11:48:58.894: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:48:58.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-265n9" for this suite.
Jul  7 11:49:04.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:49:04.952: INFO: namespace: e2e-tests-kubectl-265n9, resource: bindings, ignored listing per whitelist
Jul  7 11:49:05.012: INFO: namespace e2e-tests-kubectl-265n9 deletion completed in 6.114082369s

• [SLOW TEST:27.855 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
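
The deprecated rolling-update flow above can be reproduced directly; the two commands below are taken from this run (namespace and rc name included), minus the kubeconfig flag. Note that both --generator=run/v1 and rolling-update were already deprecated in this v1.13-era client and have since been removed from kubectl.

# Create the replication controller from an image (deprecated --generator=run/v1 form).
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine \
  --generator=run/v1 --namespace=e2e-tests-kubectl-265n9

# Roll it to the same image: kubectl creates a hash-suffixed rc, scales it up,
# scales the old rc down, then renames the new rc back to e2e-test-nginx-rc.
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent \
  --namespace=e2e-tests-kubectl-265n9
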
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:49:05.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jul  7 11:49:05.124: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jul  7 11:49:05.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ngpd6'
Jul  7 11:49:05.430: INFO: stderr: ""
Jul  7 11:49:05.430: INFO: stdout: "service/redis-slave created\n"
Jul  7 11:49:05.430: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jul  7 11:49:05.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ngpd6'
Jul  7 11:49:05.812: INFO: stderr: ""
Jul  7 11:49:05.812: INFO: stdout: "service/redis-master created\n"
Jul  7 11:49:05.813: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jul  7 11:49:05.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ngpd6'
Jul  7 11:49:06.116: INFO: stderr: ""
Jul  7 11:49:06.116: INFO: stdout: "service/frontend created\n"
Jul  7 11:49:06.116: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jul  7 11:49:06.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ngpd6'
Jul  7 11:49:06.390: INFO: stderr: ""
Jul  7 11:49:06.390: INFO: stdout: "deployment.extensions/frontend created\n"
Jul  7 11:49:06.390: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul  7 11:49:06.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ngpd6'
Jul  7 11:49:06.783: INFO: stderr: ""
Jul  7 11:49:06.783: INFO: stdout: "deployment.extensions/redis-master created\n"
Jul  7 11:49:06.784: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jul  7 11:49:06.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ngpd6'
Jul  7 11:49:07.105: INFO: stderr: ""
Jul  7 11:49:07.105: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jul  7 11:49:07.105: INFO: Waiting for all frontend pods to be Running.
Jul  7 11:49:17.156: INFO: Waiting for frontend to serve content.
Jul  7 11:49:17.172: INFO: Trying to add a new entry to the guestbook.
Jul  7 11:49:17.185: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jul  7 11:49:17.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ngpd6'
Jul  7 11:49:17.456: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 11:49:17.456: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul  7 11:49:17.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ngpd6'
Jul  7 11:49:17.692: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 11:49:17.692: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jul  7 11:49:17.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ngpd6'
Jul  7 11:49:17.849: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 11:49:17.849: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul  7 11:49:17.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ngpd6'
Jul  7 11:49:17.957: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 11:49:17.957: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul  7 11:49:17.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ngpd6'
Jul  7 11:49:18.076: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 11:49:18.076: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jul  7 11:49:18.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ngpd6'
Jul  7 11:49:18.478: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 11:49:18.478: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:49:18.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ngpd6" for this suite.
Jul  7 11:50:06.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:50:06.682: INFO: namespace: e2e-tests-kubectl-ngpd6, resource: bindings, ignored listing per whitelist
Jul  7 11:50:06.752: INFO: namespace e2e-tests-kubectl-ngpd6 deletion completed in 48.259016124s

• [SLOW TEST:61.740 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
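
All six guestbook manifests are echoed in full above and are piped to 'kubectl create -f -' on stdin. Once they are up, the frontend pods can be listed by the labels those manifests set; the command below is an illustrative check, not something the test binary runs.

# List the three guestbook frontend pods created by the Deployment above.
kubectl get pods -l app=guestbook,tier=frontend --namespace=e2e-tests-kubectl-ngpd6
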
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:50:06.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jul  7 11:50:06.894: INFO: Waiting up to 5m0s for pod "var-expansion-009f1560-c048-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-var-expansion-zxf2d" to be "success or failure"
Jul  7 11:50:06.898: INFO: Pod "var-expansion-009f1560-c048-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.359711ms
Jul  7 11:50:08.902: INFO: Pod "var-expansion-009f1560-c048-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007131725s
Jul  7 11:50:10.906: INFO: Pod "var-expansion-009f1560-c048-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011658951s
STEP: Saw pod success
Jul  7 11:50:10.906: INFO: Pod "var-expansion-009f1560-c048-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:50:10.909: INFO: Trying to get logs from node hunter-worker pod var-expansion-009f1560-c048-11ea-9ad7-0242ac11001b container dapi-container: 
STEP: delete the pod
Jul  7 11:50:10.993: INFO: Waiting for pod var-expansion-009f1560-c048-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:50:11.132: INFO: Pod var-expansion-009f1560-c048-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:50:11.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-zxf2d" for this suite.
Jul  7 11:50:17.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:50:17.225: INFO: namespace: e2e-tests-var-expansion-zxf2d, resource: bindings, ignored listing per whitelist
Jul  7 11:50:17.238: INFO: namespace e2e-tests-var-expansion-zxf2d deletion completed in 6.102622479s

• [SLOW TEST:10.485 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
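
Variable expansion means that $(VAR) references in a container's command or args are substituted from that container's own env entries before the container starts. A minimal sketch with illustrative names and values:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example         # illustrative name
spec:
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the environment"
    command: ["/bin/sh", "-c"]
    # $(MESSAGE) is expanded by Kubernetes from the env entry above,
    # before the shell ever sees the argument string.
    args: ["echo $(MESSAGE)"]
  restartPolicy: Never
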
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:50:17.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul  7 11:50:29.678: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  7 11:50:29.686: INFO: Pod pod-with-prestop-http-hook still exists
Jul  7 11:50:31.686: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  7 11:50:31.690: INFO: Pod pod-with-prestop-http-hook still exists
Jul  7 11:50:33.686: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  7 11:50:33.691: INFO: Pod pod-with-prestop-http-hook still exists
Jul  7 11:50:35.687: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  7 11:50:35.691: INFO: Pod pod-with-prestop-http-hook still exists
Jul  7 11:50:37.687: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  7 11:50:37.691: INFO: Pod pod-with-prestop-http-hook still exists
Jul  7 11:50:39.686: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  7 11:50:39.691: INFO: Pod pod-with-prestop-http-hook still exists
Jul  7 11:50:41.687: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  7 11:50:41.691: INFO: Pod pod-with-prestop-http-hook still exists
Jul  7 11:50:43.686: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  7 11:50:43.695: INFO: Pod pod-with-prestop-http-hook still exists
Jul  7 11:50:45.686: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  7 11:50:45.691: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:50:45.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-mp57f" for this suite.
Jul  7 11:51:07.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:51:07.788: INFO: namespace: e2e-tests-container-lifecycle-hook-mp57f, resource: bindings, ignored listing per whitelist
Jul  7 11:51:07.814: INFO: namespace e2e-tests-container-lifecycle-hook-mp57f deletion completed in 22.110560643s

• [SLOW TEST:50.575 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:51:07.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:51:07.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-7m625" for this suite.
Jul  7 11:51:13.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:51:14.001: INFO: namespace: e2e-tests-services-7m625, resource: bindings, ignored listing per whitelist
Jul  7 11:51:14.039: INFO: namespace e2e-tests-services-7m625 deletion completed in 6.089555807s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.225 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
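
The "secure master service" being checked is the built-in 'kubernetes' Service in the default namespace, which is expected to expose the API server on an HTTPS (443) port. It can be inspected directly; this is an illustrative check rather than what the test binary does internally.

# The built-in API server service; the test expects a secure (443) port on it.
kubectl get service kubernetes --namespace=default -o wide
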
S
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:51:14.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jul  7 11:51:14.151: INFO: Pod name pod-release: Found 0 pods out of 1
Jul  7 11:51:19.156: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:51:20.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-wpwq9" for this suite.
Jul  7 11:51:26.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:51:26.270: INFO: namespace: e2e-tests-replication-controller-wpwq9, resource: bindings, ignored listing per whitelist
Jul  7 11:51:26.313: INFO: namespace e2e-tests-replication-controller-wpwq9 deletion completed in 6.129948499s

• [SLOW TEST:12.274 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
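
"Releasing" a pod means changing its labels so it no longer matches the ReplicationController's selector; the controller then drops it and creates a replacement to restore the desired count. The test does this through the API, but the same effect can be had from the CLI. The pod name and label value below are illustrative; the namespace is the one from this run.

# Overwrite the selector label on one pod so the RC no longer owns it;
# the RC will create a new replica to get back to the desired count.
kubectl label pod pod-release-abc12 name=released --overwrite \
  --namespace=e2e-tests-replication-controller-wpwq9
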
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:51:26.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-30013af9-c048-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume secrets
Jul  7 11:51:26.408: INFO: Waiting up to 5m0s for pod "pod-secrets-3003b173-c048-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-secrets-dmvqg" to be "success or failure"
Jul  7 11:51:26.412: INFO: Pod "pod-secrets-3003b173-c048-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272091ms
Jul  7 11:51:28.480: INFO: Pod "pod-secrets-3003b173-c048-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072293455s
Jul  7 11:51:30.484: INFO: Pod "pod-secrets-3003b173-c048-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076282213s
STEP: Saw pod success
Jul  7 11:51:30.484: INFO: Pod "pod-secrets-3003b173-c048-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:51:30.488: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-3003b173-c048-11ea-9ad7-0242ac11001b container secret-volume-test: 
STEP: delete the pod
Jul  7 11:51:30.509: INFO: Waiting for pod pod-secrets-3003b173-c048-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:51:30.514: INFO: Pod pod-secrets-3003b173-c048-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:51:30.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-dmvqg" for this suite.
Jul  7 11:51:36.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:51:36.607: INFO: namespace: e2e-tests-secrets-dmvqg, resource: bindings, ignored listing per whitelist
Jul  7 11:51:36.629: INFO: namespace e2e-tests-secrets-dmvqg deletion completed in 6.112932945s

• [SLOW TEST:10.315 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
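
The combination under test is a secret volume with an explicit defaultMode plus a pod-level securityContext that runs the container as a non-root user and assigns an fsGroup, so the mounted files end up group-owned by that fsGroup with the requested mode. A sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example           # illustrative name
spec:
  securityContext:
    runAsUser: 1000                   # non-root
    fsGroup: 1000                     # group ownership applied to the volume
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["ls", "-l", "/etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret           # assumed to exist beforehand
      defaultMode: 0440               # file mode applied to every projected key
  restartPolicy: Never
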
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:51:36.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-362dc500-c048-11ea-9ad7-0242ac11001b
STEP: Creating configMap with name cm-test-opt-upd-362dc56a-c048-11ea-9ad7-0242ac11001b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-362dc500-c048-11ea-9ad7-0242ac11001b
STEP: Updating configmap cm-test-opt-upd-362dc56a-c048-11ea-9ad7-0242ac11001b
STEP: Creating configMap with name cm-test-opt-create-362dc59b-c048-11ea-9ad7-0242ac11001b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:53:06.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-g4snk" for this suite.
Jul  7 11:53:28.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:53:28.965: INFO: namespace: e2e-tests-projected-g4snk, resource: bindings, ignored listing per whitelist
Jul  7 11:53:28.971: INFO: namespace e2e-tests-projected-g4snk deletion completed in 22.125138194s

• [SLOW TEST:112.341 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
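
The "optional updates" case projects configMaps marked optional, so the pod starts even when a referenced configMap is missing, and the mounted contents change in place as configMaps are created, updated, or deleted, which is what the "waiting to observe update in volume" step above is watching for. A hedged sketch of such a volume (names shortened and illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-example    # illustrative name
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["/bin/sh", "-c", "while true; do ls /etc/projected; sleep 5; done"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del       # may be deleted after the pod starts
          optional: true
      - configMap:
          name: cm-test-opt-create    # may not exist yet when the pod starts
          optional: true
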
SSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:53:28.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jul  7 11:53:33.153: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-7926d9c0-c048-11ea-9ad7-0242ac11001b,GenerateName:,Namespace:e2e-tests-events-xvf8z,SelfLink:/api/v1/namespaces/e2e-tests-events-xvf8z/pods/send-events-7926d9c0-c048-11ea-9ad7-0242ac11001b,UID:792ab18f-c048-11ea-a300-0242ac110004,ResourceVersion:601502,Generation:0,CreationTimestamp:2020-07-07 11:53:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 98906331,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p28zr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p28zr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-p28zr true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f4a610} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001f4a630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:53:29 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:53:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:53:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 11:53:29 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.2.167,StartTime:2020-07-07 11:53:29 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-07-07 11:53:32 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://b939ac135b9a474c1de34a90e05a247be6587b459bcda9c6e6ed3be8a238c013}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jul  7 11:53:35.158: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jul  7 11:53:37.163: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:53:37.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-xvf8z" for this suite.
Jul  7 11:54:17.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:54:17.205: INFO: namespace: e2e-tests-events-xvf8z, resource: bindings, ignored listing per whitelist
Jul  7 11:54:17.275: INFO: namespace e2e-tests-events-xvf8z deletion completed in 40.095685524s

• [SLOW TEST:48.304 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:54:17.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 11:54:17.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jul  7 11:54:17.588: INFO: stderr: ""
Jul  7 11:54:17.588: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-07T09:19:16Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:54:17.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2dt7t" for this suite.
Jul  7 11:54:23.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:54:23.671: INFO: namespace: e2e-tests-kubectl-2dt7t, resource: bindings, ignored listing per whitelist
Jul  7 11:54:23.743: INFO: namespace e2e-tests-kubectl-2dt7t deletion completed in 6.150100004s

• [SLOW TEST:6.468 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:54:23.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  7 11:54:23.882: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99cb1b01-c048-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-6gs4v" to be "success or failure"
Jul  7 11:54:23.885: INFO: Pod "downwardapi-volume-99cb1b01-c048-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.269685ms
Jul  7 11:54:25.888: INFO: Pod "downwardapi-volume-99cb1b01-c048-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006496033s
Jul  7 11:54:27.942: INFO: Pod "downwardapi-volume-99cb1b01-c048-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060832647s
STEP: Saw pod success
Jul  7 11:54:27.942: INFO: Pod "downwardapi-volume-99cb1b01-c048-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:54:27.945: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-99cb1b01-c048-11ea-9ad7-0242ac11001b container client-container: 
STEP: delete the pod
Jul  7 11:54:28.132: INFO: Waiting for pod downwardapi-volume-99cb1b01-c048-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:54:28.177: INFO: Pod downwardapi-volume-99cb1b01-c048-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:54:28.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6gs4v" for this suite.
Jul  7 11:54:34.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:54:34.349: INFO: namespace: e2e-tests-projected-6gs4v, resource: bindings, ignored listing per whitelist
Jul  7 11:54:34.357: INFO: namespace e2e-tests-projected-6gs4v deletion completed in 6.17724259s

• [SLOW TEST:10.614 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
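For reference, a minimal Go sketch (not part of this run) of what the test above consumes: a projected downwardAPI volume that exposes the container's own CPU limit as a file via a resourceFieldRef. Image, limit value and paths are illustrative assumptions.

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPICPULimitPod writes the container's limits.cpu into
// /etc/podinfo/cpu_limit through a projected downwardAPI source.
func downwardAPICPULimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}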
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:54:34.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  7 11:54:34.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-kx649'
Jul  7 11:54:47.955: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  7 11:54:47.955: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jul  7 11:54:50.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-kx649'
Jul  7 11:54:50.302: INFO: stderr: ""
Jul  7 11:54:50.302: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:54:50.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kx649" for this suite.
Jul  7 11:56:52.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:56:52.599: INFO: namespace: e2e-tests-kubectl-kx649, resource: bindings, ignored listing per whitelist
Jul  7 11:56:52.666: INFO: namespace e2e-tests-kubectl-kx649 deletion completed in 2m2.311911623s

• [SLOW TEST:138.308 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:56:52.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-p6sjv
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jul  7 11:56:52.839: INFO: Found 0 stateful pods, waiting for 3
Jul  7 11:57:02.844: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 11:57:02.844: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 11:57:02.844: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  7 11:57:12.850: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 11:57:12.850: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 11:57:12.850: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul  7 11:57:12.883: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul  7 11:57:22.919: INFO: Updating stateful set ss2
Jul  7 11:57:22.939: INFO: Waiting for Pod e2e-tests-statefulset-p6sjv/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jul  7 11:57:33.111: INFO: Found 2 stateful pods, waiting for 3
Jul  7 11:57:43.116: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 11:57:43.116: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 11:57:43.116: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul  7 11:57:43.139: INFO: Updating stateful set ss2
Jul  7 11:57:43.179: INFO: Waiting for Pod e2e-tests-statefulset-p6sjv/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul  7 11:57:53.186: INFO: Waiting for Pod e2e-tests-statefulset-p6sjv/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul  7 11:58:03.203: INFO: Updating stateful set ss2
Jul  7 11:58:03.264: INFO: Waiting for StatefulSet e2e-tests-statefulset-p6sjv/ss2 to complete update
Jul  7 11:58:03.264: INFO: Waiting for Pod e2e-tests-statefulset-p6sjv/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul  7 11:58:13.272: INFO: Deleting all statefulset in ns e2e-tests-statefulset-p6sjv
Jul  7 11:58:13.275: INFO: Scaling statefulset ss2 to 0
Jul  7 11:58:43.314: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 11:58:43.317: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:58:43.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-p6sjv" for this suite.
Jul  7 11:58:49.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:58:49.502: INFO: namespace: e2e-tests-statefulset-p6sjv, resource: bindings, ignored listing per whitelist
Jul  7 11:58:49.508: INFO: namespace e2e-tests-statefulset-p6sjv deletion completed in 6.113050918s

• [SLOW TEST:116.841 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
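For reference, a minimal Go sketch (not part of this run) of the mechanism the canary/phased updates above rely on: a RollingUpdate strategy with a partition. Pods with an ordinal greater than or equal to the partition get the new template; lowering the partition step by step phases the rollout. Names and image are illustrative assumptions.

package example

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// canaryStatefulSet updates only the highest-ordinal pod (the canary) until
// the partition is lowered.
func canaryStatefulSet() *appsv1.StatefulSet {
	replicas := int32(3)
	partition := int32(2) // only ordinal 2 receives the new template for now
	labels := map[string]string{"app": "ss2-demo"}
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2-demo"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "ss2-demo",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.15-alpine",
					}},
				},
			},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: &partition,
				},
			},
		},
	}
}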
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:58:49.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 11:58:49.747: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"38374b8a-c049-11ea-a300-0242ac110004", Controller:(*bool)(0xc0013cdca2), BlockOwnerDeletion:(*bool)(0xc0013cdca3)}}
Jul  7 11:58:49.823: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"38355358-c049-11ea-a300-0242ac110004", Controller:(*bool)(0xc001ddde56), BlockOwnerDeletion:(*bool)(0xc001ddde57)}}
Jul  7 11:58:49.915: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"3835e416-c049-11ea-a300-0242ac110004", Controller:(*bool)(0xc00159e3aa), BlockOwnerDeletion:(*bool)(0xc00159e3ab)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:58:54.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-pkqsp" for this suite.
Jul  7 11:59:01.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:59:01.187: INFO: namespace: e2e-tests-gc-pkqsp, resource: bindings, ignored listing per whitelist
Jul  7 11:59:01.202: INFO: namespace e2e-tests-gc-pkqsp deletion completed in 6.251684955s

• [SLOW TEST:11.694 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
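For reference, a minimal Go sketch (not part of this run) of the building block behind the dependency circle above (pod1 owned by pod3, pod2 by pod1, pod3 by pod2): an OwnerReference pointing a dependent pod at its owner. UIDs are assigned by the API server, so the circle can only be closed by updating the pods after creation; the garbage collector must still delete all three.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setOwner marks dependent as owned by owner with blocking deletion.
func setOwner(dependent, owner *corev1.Pod) {
	controller := true
	block := true
	dependent.OwnerReferences = []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID, // only known once the owner exists
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}}
}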
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:59:01.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-3f27f2c2-c049-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume secrets
Jul  7 11:59:01.342: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3f2a19bb-c049-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-pqqwv" to be "success or failure"
Jul  7 11:59:01.365: INFO: Pod "pod-projected-secrets-3f2a19bb-c049-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.246285ms
Jul  7 11:59:03.369: INFO: Pod "pod-projected-secrets-3f2a19bb-c049-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027131066s
Jul  7 11:59:05.374: INFO: Pod "pod-projected-secrets-3f2a19bb-c049-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031904841s
STEP: Saw pod success
Jul  7 11:59:05.374: INFO: Pod "pod-projected-secrets-3f2a19bb-c049-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:59:05.377: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-3f2a19bb-c049-11ea-9ad7-0242ac11001b container projected-secret-volume-test: 
STEP: delete the pod
Jul  7 11:59:05.505: INFO: Waiting for pod pod-projected-secrets-3f2a19bb-c049-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:59:05.645: INFO: Pod pod-projected-secrets-3f2a19bb-c049-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:59:05.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pqqwv" for this suite.
Jul  7 11:59:11.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:59:11.753: INFO: namespace: e2e-tests-projected-pqqwv, resource: bindings, ignored listing per whitelist
Jul  7 11:59:11.778: INFO: namespace e2e-tests-projected-pqqwv deletion completed in 6.128284535s

• [SLOW TEST:10.576 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
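For reference, a minimal Go sketch (not part of this run) of a pod consuming a Secret through a projected volume with a key-to-path mapping, the pattern the test above exercises. Secret name, key and paths are illustrative assumptions.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretMappingPod remaps the secret key "data-1" to a new file name
// inside the projected mount.
func projectedSecretMappingPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
								Items:                []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
							},
						}},
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}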
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:59:11.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul  7 11:59:11.946: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  7 11:59:11.954: INFO: Waiting for terminating namespaces to be deleted...
Jul  7 11:59:11.956: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Jul  7 11:59:11.961: INFO: kindnet-mcn92 from kube-system started at 2020-07-04 07:47:46 +0000 UTC (1 container statuses recorded)
Jul  7 11:59:11.961: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 11:59:11.961: INFO: kube-proxy-cqbm8 from kube-system started at 2020-07-04 07:47:44 +0000 UTC (1 container statuses recorded)
Jul  7 11:59:11.961: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  7 11:59:11.961: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Jul  7 11:59:11.969: INFO: coredns-54ff9cd656-mgg2q from kube-system started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul  7 11:59:11.969: INFO: 	Container coredns ready: true, restart count 0
Jul  7 11:59:11.969: INFO: coredns-54ff9cd656-l7q92 from kube-system started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul  7 11:59:11.969: INFO: 	Container coredns ready: true, restart count 0
Jul  7 11:59:11.969: INFO: kube-proxy-52vr2 from kube-system started at 2020-07-04 07:47:44 +0000 UTC (1 container statuses recorded)
Jul  7 11:59:11.969: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  7 11:59:11.969: INFO: kindnet-rll2b from kube-system started at 2020-07-04 07:47:46 +0000 UTC (1 container statuses recorded)
Jul  7 11:59:11.969: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 11:59:11.969: INFO: local-path-provisioner-674595c7-cvgpb from local-path-storage started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul  7 11:59:11.969: INFO: 	Container local-path-provisioner ready: true, restart count 2
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.161f768f8a5f0cd2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:59:12.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-8c477" for this suite.
Jul  7 11:59:19.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:59:19.048: INFO: namespace: e2e-tests-sched-pred-8c477, resource: bindings, ignored listing per whitelist
Jul  7 11:59:19.092: INFO: namespace e2e-tests-sched-pred-8c477 deletion completed in 6.097880811s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.314 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
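For reference, a minimal Go sketch (not part of this run) of the scenario above: a pod whose nodeSelector matches no node label stays Pending and the scheduler records a FailedScheduling event ("0/3 nodes are available"). Label, pod name and image are illustrative assumptions.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unschedulablePod requests a node label that no node in the cluster carries.
func unschedulablePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod-demo"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"nonexistent-label": "value"}, // matches no node
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
}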
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:59:19.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  7 11:59:19.234: INFO: Waiting up to 5m0s for pod "pod-49d4a86b-c049-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-5gtxs" to be "success or failure"
Jul  7 11:59:19.244: INFO: Pod "pod-49d4a86b-c049-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.148201ms
Jul  7 11:59:21.430: INFO: Pod "pod-49d4a86b-c049-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195502853s
Jul  7 11:59:23.435: INFO: Pod "pod-49d4a86b-c049-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.200003546s
STEP: Saw pod success
Jul  7 11:59:23.435: INFO: Pod "pod-49d4a86b-c049-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 11:59:23.438: INFO: Trying to get logs from node hunter-worker pod pod-49d4a86b-c049-11ea-9ad7-0242ac11001b container test-container: 
STEP: delete the pod
Jul  7 11:59:23.702: INFO: Waiting for pod pod-49d4a86b-c049-11ea-9ad7-0242ac11001b to disappear
Jul  7 11:59:23.776: INFO: Pod pod-49d4a86b-c049-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 11:59:23.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5gtxs" for this suite.
Jul  7 11:59:29.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 11:59:29.966: INFO: namespace: e2e-tests-emptydir-5gtxs, resource: bindings, ignored listing per whitelist
Jul  7 11:59:29.973: INFO: namespace e2e-tests-emptydir-5gtxs deletion completed in 6.19319888s

• [SLOW TEST:10.881 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
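For reference, a minimal Go sketch (not part of this run) of the shape of the test above: a tmpfs-backed emptyDir (medium "Memory") written by a non-root user, with the file mode checked after a chmod to 0666. Image, UID and paths are illustrative assumptions.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirTmpfsPod writes a 0666 file into a memory-backed emptyDir as a
// non-root user and lists it so the mode can be verified from the logs.
func emptyDirTmpfsPod() *corev1.Pod {
	uid := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "scratch",
					MountPath: "/mnt/volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}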
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 11:59:29.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-4zcbv
Jul  7 11:59:36.368: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-4zcbv
STEP: checking the pod's current state and verifying that restartCount is present
Jul  7 11:59:36.371: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:03:36.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-4zcbv" for this suite.
Jul  7 12:03:42.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:03:43.031: INFO: namespace: e2e-tests-container-probe-4zcbv, resource: bindings, ignored listing per whitelist
Jul  7 12:03:43.062: INFO: namespace e2e-tests-container-probe-4zcbv deletion completed in 6.090777539s

• [SLOW TEST:253.089 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
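For reference, a minimal Go sketch (not part of this run) of a pod like the one above: the container creates /tmp/health at startup and leaves it in place, so an exec liveness probe running "cat /tmp/health" keeps succeeding and the restart count stays at 0. Image, timings and paths are illustrative assumptions.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// livenessExecPod defines an exec liveness probe that should never fail.
func livenessExecPod() *corev1.Pod {
	probe := &corev1.Probe{InitialDelaySeconds: 15, PeriodSeconds: 5}
	// Assigned via the promoted field so the sketch compiles whether the
	// embedded handler struct is named Handler (older k8s.io/api releases)
	// or ProbeHandler (newer ones).
	probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:          "liveness",
				Image:         "busybox",
				Command:       []string{"sh", "-c", "touch /tmp/health; sleep 600"},
				LivenessProbe: probe,
			}},
		},
	}
}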
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:03:43.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-e76c712c-c049-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume secrets
Jul  7 12:03:43.830: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e77057d3-c049-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-hwlxt" to be "success or failure"
Jul  7 12:03:43.998: INFO: Pod "pod-projected-secrets-e77057d3-c049-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 167.150359ms
Jul  7 12:03:46.002: INFO: Pod "pod-projected-secrets-e77057d3-c049-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171145625s
Jul  7 12:03:48.079: INFO: Pod "pod-projected-secrets-e77057d3-c049-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248299136s
Jul  7 12:03:50.082: INFO: Pod "pod-projected-secrets-e77057d3-c049-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 6.251789377s
Jul  7 12:03:52.125: INFO: Pod "pod-projected-secrets-e77057d3-c049-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.294996184s
STEP: Saw pod success
Jul  7 12:03:52.125: INFO: Pod "pod-projected-secrets-e77057d3-c049-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:03:52.128: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-e77057d3-c049-11ea-9ad7-0242ac11001b container projected-secret-volume-test: 
STEP: delete the pod
Jul  7 12:03:52.399: INFO: Waiting for pod pod-projected-secrets-e77057d3-c049-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:03:52.461: INFO: Pod pod-projected-secrets-e77057d3-c049-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:03:52.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hwlxt" for this suite.
Jul  7 12:03:58.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:03:58.942: INFO: namespace: e2e-tests-projected-hwlxt, resource: bindings, ignored listing per whitelist
Jul  7 12:03:58.949: INFO: namespace e2e-tests-projected-hwlxt deletion completed in 6.483854252s

• [SLOW TEST:15.887 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:03:58.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-lbb59
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  7 12:03:59.230: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  7 12:04:32.019: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.183:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-lbb59 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 12:04:32.019: INFO: >>> kubeConfig: /root/.kube/config
I0707 12:04:32.124052       6 log.go:172] (0xc001dc22c0) (0xc00283b9a0) Create stream
I0707 12:04:32.124099       6 log.go:172] (0xc001dc22c0) (0xc00283b9a0) Stream added, broadcasting: 1
I0707 12:04:32.127124       6 log.go:172] (0xc001dc22c0) Reply frame received for 1
I0707 12:04:32.127173       6 log.go:172] (0xc001dc22c0) (0xc0021a83c0) Create stream
I0707 12:04:32.127194       6 log.go:172] (0xc001dc22c0) (0xc0021a83c0) Stream added, broadcasting: 3
I0707 12:04:32.128187       6 log.go:172] (0xc001dc22c0) Reply frame received for 3
I0707 12:04:32.128225       6 log.go:172] (0xc001dc22c0) (0xc001a1e000) Create stream
I0707 12:04:32.128243       6 log.go:172] (0xc001dc22c0) (0xc001a1e000) Stream added, broadcasting: 5
I0707 12:04:32.129511       6 log.go:172] (0xc001dc22c0) Reply frame received for 5
I0707 12:04:32.191460       6 log.go:172] (0xc001dc22c0) Data frame received for 3
I0707 12:04:32.191508       6 log.go:172] (0xc001dc22c0) Data frame received for 5
I0707 12:04:32.191576       6 log.go:172] (0xc001a1e000) (5) Data frame handling
I0707 12:04:32.191615       6 log.go:172] (0xc0021a83c0) (3) Data frame handling
I0707 12:04:32.191678       6 log.go:172] (0xc0021a83c0) (3) Data frame sent
I0707 12:04:32.191711       6 log.go:172] (0xc001dc22c0) Data frame received for 3
I0707 12:04:32.191735       6 log.go:172] (0xc0021a83c0) (3) Data frame handling
I0707 12:04:32.193495       6 log.go:172] (0xc001dc22c0) Data frame received for 1
I0707 12:04:32.193515       6 log.go:172] (0xc00283b9a0) (1) Data frame handling
I0707 12:04:32.193531       6 log.go:172] (0xc00283b9a0) (1) Data frame sent
I0707 12:04:32.193566       6 log.go:172] (0xc001dc22c0) (0xc00283b9a0) Stream removed, broadcasting: 1
I0707 12:04:32.193588       6 log.go:172] (0xc001dc22c0) Go away received
I0707 12:04:32.193767       6 log.go:172] (0xc001dc22c0) (0xc00283b9a0) Stream removed, broadcasting: 1
I0707 12:04:32.193809       6 log.go:172] (0xc001dc22c0) (0xc0021a83c0) Stream removed, broadcasting: 3
I0707 12:04:32.193831       6 log.go:172] (0xc001dc22c0) (0xc001a1e000) Stream removed, broadcasting: 5
Jul  7 12:04:32.193: INFO: Found all expected endpoints: [netserver-0]
Jul  7 12:04:32.196: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.79:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-lbb59 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 12:04:32.197: INFO: >>> kubeConfig: /root/.kube/config
I0707 12:04:32.222620       6 log.go:172] (0xc0016ca2c0) (0xc00065b9a0) Create stream
I0707 12:04:32.222655       6 log.go:172] (0xc0016ca2c0) (0xc00065b9a0) Stream added, broadcasting: 1
I0707 12:04:32.226070       6 log.go:172] (0xc0016ca2c0) Reply frame received for 1
I0707 12:04:32.226131       6 log.go:172] (0xc0016ca2c0) (0xc0021a8460) Create stream
I0707 12:04:32.226163       6 log.go:172] (0xc0016ca2c0) (0xc0021a8460) Stream added, broadcasting: 3
I0707 12:04:32.227010       6 log.go:172] (0xc0016ca2c0) Reply frame received for 3
I0707 12:04:32.227048       6 log.go:172] (0xc0016ca2c0) (0xc0017646e0) Create stream
I0707 12:04:32.227061       6 log.go:172] (0xc0016ca2c0) (0xc0017646e0) Stream added, broadcasting: 5
I0707 12:04:32.227930       6 log.go:172] (0xc0016ca2c0) Reply frame received for 5
I0707 12:04:32.302092       6 log.go:172] (0xc0016ca2c0) Data frame received for 3
I0707 12:04:32.302142       6 log.go:172] (0xc0021a8460) (3) Data frame handling
I0707 12:04:32.302179       6 log.go:172] (0xc0021a8460) (3) Data frame sent
I0707 12:04:32.302383       6 log.go:172] (0xc0016ca2c0) Data frame received for 5
I0707 12:04:32.302418       6 log.go:172] (0xc0017646e0) (5) Data frame handling
I0707 12:04:32.302495       6 log.go:172] (0xc0016ca2c0) Data frame received for 3
I0707 12:04:32.302536       6 log.go:172] (0xc0021a8460) (3) Data frame handling
I0707 12:04:32.304278       6 log.go:172] (0xc0016ca2c0) Data frame received for 1
I0707 12:04:32.304329       6 log.go:172] (0xc00065b9a0) (1) Data frame handling
I0707 12:04:32.304361       6 log.go:172] (0xc00065b9a0) (1) Data frame sent
I0707 12:04:32.304441       6 log.go:172] (0xc0016ca2c0) (0xc00065b9a0) Stream removed, broadcasting: 1
I0707 12:04:32.304502       6 log.go:172] (0xc0016ca2c0) Go away received
I0707 12:04:32.304571       6 log.go:172] (0xc0016ca2c0) (0xc00065b9a0) Stream removed, broadcasting: 1
I0707 12:04:32.304602       6 log.go:172] (0xc0016ca2c0) (0xc0021a8460) Stream removed, broadcasting: 3
I0707 12:04:32.304620       6 log.go:172] (0xc0016ca2c0) (0xc0017646e0) Stream removed, broadcasting: 5
Jul  7 12:04:32.304: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:04:32.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-lbb59" for this suite.
Jul  7 12:04:58.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:04:58.618: INFO: namespace: e2e-tests-pod-network-test-lbb59, resource: bindings, ignored listing per whitelist
Jul  7 12:04:58.631: INFO: namespace e2e-tests-pod-network-test-lbb59 deletion completed in 26.32245294s

• [SLOW TEST:59.682 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:04:58.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 12:04:59.389: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jul  7 12:05:04.394: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  7 12:05:06.401: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul  7 12:05:06.428: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-xwdb5,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-xwdb5/deployments/test-cleanup-deployment,UID:18c7b849-c04a-11ea-a300-0242ac110004,ResourceVersion:603680,Generation:1,CreationTimestamp:2020-07-07 12:05:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jul  7 12:05:06.430: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:05:06.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-xwdb5" for this suite.
Jul  7 12:05:14.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:05:14.496: INFO: namespace: e2e-tests-deployment-xwdb5, resource: bindings, ignored listing per whitelist
Jul  7 12:05:14.552: INFO: namespace e2e-tests-deployment-xwdb5 deletion completed in 8.109386001s

• [SLOW TEST:15.921 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:05:14.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul  7 12:05:14.645: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  7 12:05:14.665: INFO: Waiting for terminating namespaces to be deleted...
Jul  7 12:05:14.668: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Jul  7 12:05:14.678: INFO: kindnet-mcn92 from kube-system started at 2020-07-04 07:47:46 +0000 UTC (1 container statuses recorded)
Jul  7 12:05:14.678: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 12:05:14.678: INFO: bono-9858d478b-59h5k from default started at 2020-07-07 12:03:05 +0000 UTC (2 container statuses recorded)
Jul  7 12:05:14.678: INFO: 	Container bono ready: false, restart count 0
Jul  7 12:05:14.678: INFO: 	Container tailer ready: false, restart count 0
Jul  7 12:05:14.678: INFO: kube-proxy-cqbm8 from kube-system started at 2020-07-04 07:47:44 +0000 UTC (1 container statuses recorded)
Jul  7 12:05:14.678: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  7 12:05:14.678: INFO: cassandra-6bbbb59c99-rlzgn from default started at 2020-07-07 12:03:05 +0000 UTC (1 container statuses recorded)
Jul  7 12:05:14.678: INFO: 	Container cassandra ready: false, restart count 0
Jul  7 12:05:14.678: INFO: homestead-574bb6776f-sj2n6 from default started at 2020-07-07 12:03:06 +0000 UTC (2 container statuses recorded)
Jul  7 12:05:14.678: INFO: 	Container homestead ready: false, restart count 0
Jul  7 12:05:14.678: INFO: 	Container tailer ready: false, restart count 0
Jul  7 12:05:14.678: INFO: ralf-7f4f5c54cc-6fsdw from default started at 2020-07-07 12:03:06 +0000 UTC (2 container statuses recorded)
Jul  7 12:05:14.678: INFO: 	Container ralf ready: false, restart count 0
Jul  7 12:05:14.678: INFO: 	Container tailer ready: false, restart count 0
Jul  7 12:05:14.678: INFO: ellis-85b9f775cb-x6672 from default started at 2020-07-07 12:03:06 +0000 UTC (1 container statuses recorded)
Jul  7 12:05:14.678: INFO: 	Container ellis ready: true, restart count 0
Jul  7 12:05:14.678: INFO: etcd-c8fddbb95-9k75k from default started at 2020-07-07 12:03:06 +0000 UTC (1 container statuses recorded)
Jul  7 12:05:14.678: INFO: 	Container etcd ready: true, restart count 0
Jul  7 12:05:14.678: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Jul  7 12:05:14.717: INFO: homestead-prov-bfd88f7dc-dj6jx from default started at 2020-07-07 12:03:06 +0000 UTC (1 container statuses recorded)
Jul  7 12:05:14.717: INFO: 	Container homestead-prov ready: false, restart count 0
Jul  7 12:05:14.717: INFO: coredns-54ff9cd656-mgg2q from kube-system started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul  7 12:05:14.717: INFO: 	Container coredns ready: true, restart count 0
Jul  7 12:05:14.717: INFO: coredns-54ff9cd656-l7q92 from kube-system started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul  7 12:05:14.717: INFO: 	Container coredns ready: true, restart count 0
Jul  7 12:05:14.717: INFO: astaire-dc9749bcf-q6x6j from default started at 2020-07-07 12:03:05 +0000 UTC (2 container statuses recorded)
Jul  7 12:05:14.717: INFO: 	Container astaire ready: true, restart count 0
Jul  7 12:05:14.717: INFO: 	Container tailer ready: true, restart count 0
Jul  7 12:05:14.717: INFO: homer-74999b8998-59lht from default started at 2020-07-07 12:03:05 +0000 UTC (1 container statuses recorded)
Jul  7 12:05:14.717: INFO: 	Container homer ready: false, restart count 0
Jul  7 12:05:14.717: INFO: kube-proxy-52vr2 from kube-system started at 2020-07-04 07:47:44 +0000 UTC (1 container statuses recorded)
Jul  7 12:05:14.717: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  7 12:05:14.717: INFO: kindnet-rll2b from kube-system started at 2020-07-04 07:47:46 +0000 UTC (1 container statuses recorded)
Jul  7 12:05:14.717: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  7 12:05:14.717: INFO: local-path-provisioner-674595c7-cvgpb from local-path-storage started at 2020-07-04 07:48:14 +0000 UTC (1 container statuses recorded)
Jul  7 12:05:14.717: INFO: 	Container local-path-provisioner ready: true, restart count 2
Jul  7 12:05:14.717: INFO: chronos-6c6b667457-t6k65 from default started at 2020-07-07 12:03:05 +0000 UTC (2 container statuses recorded)
Jul  7 12:05:14.717: INFO: 	Container chronos ready: true, restart count 0
Jul  7 12:05:14.717: INFO: 	Container tailer ready: true, restart count 0
Jul  7 12:05:14.717: INFO: sprout-675894d659-lllf2 from default started at 2020-07-07 12:03:06 +0000 UTC (2 container statuses recorded)
Jul  7 12:05:14.717: INFO: 	Container sprout ready: false, restart count 0
Jul  7 12:05:14.717: INFO: 	Container tailer ready: false, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-202532ae-c04a-11ea-9ad7-0242ac11001b 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-202532ae-c04a-11ea-9ad7-0242ac11001b off the node hunter-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-202532ae-c04a-11ea-9ad7-0242ac11001b
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:05:22.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-q4hn8" for this suite.
Jul  7 12:05:36.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:05:36.931: INFO: namespace: e2e-tests-sched-pred-q4hn8, resource: bindings, ignored listing per whitelist
Jul  7 12:05:36.931: INFO: namespace e2e-tests-sched-pred-q4hn8 deletion completed in 14.091725676s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:22.379 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
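The NodeSelector spec above labels a chosen node and then relaunches the pod with a matching nodeSelector. A minimal sketch of such a pod spec, assuming k8s.io/api and k8s.io/apimachinery are available as modules; the label key/value and image are placeholders (the suite generates a random kubernetes.io/e2e-<uuid> key, seen in the log as kubernetes.io/e2e-202532ae-... with value 42):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical label key/value; the e2e suite applies its generated
	// key to the node found by the unlabeled "probe" pod first.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1", // placeholder image
			}},
			// The scheduler only considers nodes carrying this exact label.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-example": "42",
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

The imperative equivalent of the labeling step the log shows would be labeling the node (e.g. hunter-worker2) with that key/value before creating the pod, and removing the label afterwards.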
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:05:36.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  7 12:05:37.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-vf959'
Jul  7 12:05:39.329: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  7 12:05:39.329: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jul  7 12:05:43.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-vf959'
Jul  7 12:05:43.453: INFO: stderr: ""
Jul  7 12:05:43.453: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:05:43.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vf959" for this suite.
Jul  7 12:05:49.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:05:49.531: INFO: namespace: e2e-tests-kubectl-vf959, resource: bindings, ignored listing per whitelist
Jul  7 12:05:49.584: INFO: namespace e2e-tests-kubectl-vf959 deletion completed in 6.077772405s

• [SLOW TEST:12.653 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
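The kubectl run above uses the deprecated --generator=deployment/v1beta1, as the stderr warning notes. A sketch of the equivalent object created declaratively against apps/v1, assuming placeholder replica count and reusing the image and run= label from the log:

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"run": "e2e-test-nginx-deployment"}
	dep := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1), // assumed; the generator also defaults to 1
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-nginx-deployment",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}
```

The test's verification step amounts to checking that this Deployment exists and that a pod matching its selector was created by its ReplicaSet.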
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:05:49.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:05:55.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-shzms" for this suite.
Jul  7 12:06:35.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:06:35.845: INFO: namespace: e2e-tests-kubelet-test-shzms, resource: bindings, ignored listing per whitelist
Jul  7 12:06:35.871: INFO: namespace e2e-tests-kubelet-test-shzms deletion completed in 40.103882284s

• [SLOW TEST:46.286 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
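The Kubelet test above schedules a busybox container whose root filesystem is read-only and asserts that writes to it fail. A minimal sketch of the securityContext involved; the image, command and pod name are assumptions, not the suite's exact values:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox:1.29",
				// An attempt to write to / should fail with a read-only
				// filesystem error when the flag below is set.
				Command: []string{"/bin/sh", "-c", "echo test > /file; sleep 60"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: boolPtr(true),
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```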
SSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:06:35.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-gzkgr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gzkgr to expose endpoints map[]
Jul  7 12:06:36.611: INFO: Get endpoints failed (54.817702ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jul  7 12:06:37.615: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gzkgr exposes endpoints map[] (1.058435408s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-gzkgr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gzkgr to expose endpoints map[pod1:[80]]
Jul  7 12:06:41.817: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.192071337s elapsed, will retry)
Jul  7 12:06:43.862: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gzkgr exposes endpoints map[pod1:[80]] (6.236646349s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-gzkgr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gzkgr to expose endpoints map[pod1:[80] pod2:[80]]
Jul  7 12:06:47.981: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gzkgr exposes endpoints map[pod1:[80] pod2:[80]] (4.114390425s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-gzkgr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gzkgr to expose endpoints map[pod2:[80]]
Jul  7 12:06:49.280: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gzkgr exposes endpoints map[pod2:[80]] (1.295090091s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-gzkgr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-gzkgr to expose endpoints map[]
Jul  7 12:06:50.612: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-gzkgr exposes endpoints map[] (1.256174702s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:06:50.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-gzkgr" for this suite.
Jul  7 12:06:59.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:06:59.777: INFO: namespace: e2e-tests-services-gzkgr, resource: bindings, ignored listing per whitelist
Jul  7 12:07:00.257: INFO: namespace e2e-tests-services-gzkgr deletion completed in 8.840710886s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:24.386 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
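The Services test above creates a selector-based Service, then pods that match it, and watches the Endpoints object fill (map[pod1:[80]], then map[pod1:[80] pod2:[80]]) and empty again as pods are deleted. A sketch of the Service plus one matching pod; the selector label and image are assumptions, while the service name and port mirror the log:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// endpoint-test2 selects pods carrying the label name=endpoint-test2
	// (label key/value assumed for illustration).
	svc := corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "endpoint-test2"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	// pod1 matches the selector, so its IP:80 appears in the Endpoints object.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod1",
			Labels: map[string]string{"name": "endpoint-test2"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // placeholder; the suite uses its own test image
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
			}},
		},
	}
	for _, obj := range []interface{}{svc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}
```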
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:07:00.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul  7 12:07:00.555: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:07:17.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-jlfdf" for this suite.
Jul  7 12:07:43.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:07:43.772: INFO: namespace: e2e-tests-init-container-jlfdf, resource: bindings, ignored listing per whitelist
Jul  7 12:07:43.796: INFO: namespace e2e-tests-init-container-jlfdf deletion completed in 26.116543053s

• [SLOW TEST:43.538 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
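The InitContainer test above creates a RestartAlways pod whose init containers must all complete before the main container starts (PodSpec: initContainers in spec.initContainers). A sketch of that shape, with assumed images and commands:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Init containers run sequentially; each must exit 0 before the
			// next starts, and all must finish before regular containers run.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```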
SSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:07:43.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-7nrl4
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-7nrl4
STEP: Deleting pre-stop pod
Jul  7 12:08:00.992: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:08:00.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-7nrl4" for this suite.
Jul  7 12:08:41.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:08:41.139: INFO: namespace: e2e-tests-prestop-7nrl4, resource: bindings, ignored listing per whitelist
Jul  7 12:08:41.217: INFO: namespace e2e-tests-prestop-7nrl4 deletion completed in 40.213884025s

• [SLOW TEST:57.422 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
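In the PreStop test above, deleting the tester pod fires its preStop hook, which reports to the server pod; the server's state dump shows "prestop": 1 received. A sketch of a container with a preStop exec hook, assuming a recent k8s.io/api where the type is corev1.LifecycleHandler (older releases, including the v1.13 line this log comes from, name it corev1.Handler); the command and SERVER_IP are placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// On deletion the kubelet runs this hook before sending
					// SIGTERM; here it notifies the server pod's /prestop
					// endpoint. SERVER_IP is a placeholder.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							Command: []string{"wget", "-qO-", "http://SERVER_IP:8080/prestop"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```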
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:08:41.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jul  7 12:08:41.811: INFO: Waiting up to 5m0s for pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-9v4rd" in namespace "e2e-tests-svcaccounts-rd6qh" to be "success or failure"
Jul  7 12:08:41.823: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-9v4rd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.529505ms
Jul  7 12:08:44.082: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-9v4rd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270851831s
Jul  7 12:08:46.086: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-9v4rd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.275303869s
Jul  7 12:08:48.123: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-9v4rd": Phase="Running", Reason="", readiness=false. Elapsed: 6.312052099s
Jul  7 12:08:50.127: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-9v4rd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.315677057s
STEP: Saw pod success
Jul  7 12:08:50.127: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-9v4rd" satisfied condition "success or failure"
Jul  7 12:08:50.130: INFO: Trying to get logs from node hunter-worker pod pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-9v4rd container token-test: 
STEP: delete the pod
Jul  7 12:08:50.160: INFO: Waiting for pod pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-9v4rd to disappear
Jul  7 12:08:50.163: INFO: Pod pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-9v4rd no longer exists
STEP: Creating a pod to test consume service account root CA
Jul  7 12:08:50.166: INFO: Waiting up to 5m0s for pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-8rqgc" in namespace "e2e-tests-svcaccounts-rd6qh" to be "success or failure"
Jul  7 12:08:50.176: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-8rqgc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.897553ms
Jul  7 12:08:52.218: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-8rqgc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052688904s
Jul  7 12:08:54.380: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-8rqgc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214863803s
Jul  7 12:08:56.483: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-8rqgc": Phase="Running", Reason="", readiness=false. Elapsed: 6.317134482s
Jul  7 12:08:58.506: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-8rqgc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.340360188s
STEP: Saw pod success
Jul  7 12:08:58.506: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-8rqgc" satisfied condition "success or failure"
Jul  7 12:08:58.518: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-8rqgc container root-ca-test: 
STEP: delete the pod
Jul  7 12:08:58.550: INFO: Waiting for pod pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-8rqgc to disappear
Jul  7 12:08:58.566: INFO: Pod pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-8rqgc no longer exists
STEP: Creating a pod to test consume service account namespace
Jul  7 12:08:58.569: INFO: Waiting up to 5m0s for pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-nlzf5" in namespace "e2e-tests-svcaccounts-rd6qh" to be "success or failure"
Jul  7 12:08:58.585: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-nlzf5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.862066ms
Jul  7 12:09:00.656: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-nlzf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08748262s
Jul  7 12:09:02.661: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-nlzf5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092181908s
Jul  7 12:09:04.664: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-nlzf5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095621668s
Jul  7 12:09:06.716: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-nlzf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.147664097s
STEP: Saw pod success
Jul  7 12:09:06.716: INFO: Pod "pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-nlzf5" satisfied condition "success or failure"
Jul  7 12:09:06.720: INFO: Trying to get logs from node hunter-worker pod pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-nlzf5 container namespace-test: 
STEP: delete the pod
Jul  7 12:09:06.748: INFO: Waiting for pod pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-nlzf5 to disappear
Jul  7 12:09:06.762: INFO: Pod pod-service-account-992acd8c-c04a-11ea-9ad7-0242ac11001b-nlzf5 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:09:06.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-rd6qh" for this suite.
Jul  7 12:09:12.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:09:12.817: INFO: namespace: e2e-tests-svcaccounts-rd6qh, resource: bindings, ignored listing per whitelist
Jul  7 12:09:12.856: INFO: namespace e2e-tests-svcaccounts-rd6qh deletion completed in 6.089489906s

• [SLOW TEST:31.638 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
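The ServiceAccounts test above verifies that the auto-created token, root CA and namespace files are mounted into pods at the standard service-account path. A sketch of a pod that reads the token from that well-known location; the image, command and pod name are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// With automountServiceAccountToken left at its default, the kubelet
	// places token, ca.crt and namespace files under this directory.
	const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "token-test",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "cat " + saDir + "/token"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

The test's three pods (token-test, root-ca-test, namespace-test seen in the log) follow this pattern, each reading a different file from the same mount.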
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:09:12.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-584w
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 12:09:13.003: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-584w" in namespace "e2e-tests-subpath-sttc7" to be "success or failure"
Jul  7 12:09:13.008: INFO: Pod "pod-subpath-test-secret-584w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.441815ms
Jul  7 12:09:15.117: INFO: Pod "pod-subpath-test-secret-584w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114011275s
Jul  7 12:09:17.121: INFO: Pod "pod-subpath-test-secret-584w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118002915s
Jul  7 12:09:19.125: INFO: Pod "pod-subpath-test-secret-584w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122103119s
Jul  7 12:09:21.130: INFO: Pod "pod-subpath-test-secret-584w": Phase="Running", Reason="", readiness=false. Elapsed: 8.126746458s
Jul  7 12:09:23.134: INFO: Pod "pod-subpath-test-secret-584w": Phase="Running", Reason="", readiness=false. Elapsed: 10.130874564s
Jul  7 12:09:25.138: INFO: Pod "pod-subpath-test-secret-584w": Phase="Running", Reason="", readiness=false. Elapsed: 12.134744937s
Jul  7 12:09:27.142: INFO: Pod "pod-subpath-test-secret-584w": Phase="Running", Reason="", readiness=false. Elapsed: 14.138662591s
Jul  7 12:09:29.146: INFO: Pod "pod-subpath-test-secret-584w": Phase="Running", Reason="", readiness=false. Elapsed: 16.14270988s
Jul  7 12:09:31.150: INFO: Pod "pod-subpath-test-secret-584w": Phase="Running", Reason="", readiness=false. Elapsed: 18.146649182s
Jul  7 12:09:33.154: INFO: Pod "pod-subpath-test-secret-584w": Phase="Running", Reason="", readiness=false. Elapsed: 20.150600681s
Jul  7 12:09:35.158: INFO: Pod "pod-subpath-test-secret-584w": Phase="Running", Reason="", readiness=false. Elapsed: 22.155004644s
Jul  7 12:09:37.163: INFO: Pod "pod-subpath-test-secret-584w": Phase="Running", Reason="", readiness=false. Elapsed: 24.159705687s
Jul  7 12:09:39.195: INFO: Pod "pod-subpath-test-secret-584w": Phase="Running", Reason="", readiness=false. Elapsed: 26.191491664s
Jul  7 12:09:41.198: INFO: Pod "pod-subpath-test-secret-584w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.19453868s
STEP: Saw pod success
Jul  7 12:09:41.198: INFO: Pod "pod-subpath-test-secret-584w" satisfied condition "success or failure"
Jul  7 12:09:41.200: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-584w container test-container-subpath-secret-584w: 
STEP: delete the pod
Jul  7 12:09:41.227: INFO: Waiting for pod pod-subpath-test-secret-584w to disappear
Jul  7 12:09:41.236: INFO: Pod pod-subpath-test-secret-584w no longer exists
STEP: Deleting pod pod-subpath-test-secret-584w
Jul  7 12:09:41.236: INFO: Deleting pod "pod-subpath-test-secret-584w" in namespace "e2e-tests-subpath-sttc7"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:09:41.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-sttc7" for this suite.
Jul  7 12:09:47.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:09:47.264: INFO: namespace: e2e-tests-subpath-sttc7, resource: bindings, ignored listing per whitelist
Jul  7 12:09:47.324: INFO: namespace e2e-tests-subpath-sttc7 deletion completed in 6.082561812s

• [SLOW TEST:34.468 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
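The Subpath test above mounts a secret-backed ("atomic writer") volume into a container via a subPath and checks the content survives the volume's atomic updates. A sketch of a secret volume consumed through SubPath; the secret name, key and paths are placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// "my-secret" is a placeholder; the suite creates its own secret.
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "cat /test-volume/key-file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume/key-file",
					// Only this single key of the secret volume is mounted,
					// not the whole directory.
					SubPath: "secret-key", // placeholder key name
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```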
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:09:47.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 12:10:11.479: INFO: Container started at 2020-07-07 12:09:50 +0000 UTC, pod became ready at 2020-07-07 12:10:11 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:10:11.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gjtq7" for this suite.
Jul  7 12:10:33.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:10:33.512: INFO: namespace: e2e-tests-container-probe-gjtq7, resource: bindings, ignored listing per whitelist
Jul  7 12:10:33.574: INFO: namespace e2e-tests-container-probe-gjtq7 deletion completed in 22.091446315s

• [SLOW TEST:46.250 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
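The probe test above checks that the pod only reports Ready after the readiness probe's initial delay (the log shows the container started at 12:09:50 and became ready at 12:10:11) and that a succeeding probe never causes a restart. A sketch of a readiness probe with an initial delay, assuming a recent k8s.io/api where the embedded handler is corev1.ProbeHandler (older releases name it corev1.Handler); the delay, period, image and port are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "nginx:1.14-alpine", // placeholder serving on :80
				ReadinessProbe: &corev1.Probe{
					// The pod must not be Ready before this delay elapses;
					// a probe that keeps succeeding never triggers a restart.
					InitialDelaySeconds: 20,
					PeriodSeconds:       5,
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/",
							Port: intstr.FromInt(80),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```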
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:10:33.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0707 12:10:43.794790       6 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  7 12:10:43.794: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:10:43.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-x77m2" for this suite.
Jul  7 12:10:49.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:10:49.864: INFO: namespace: e2e-tests-gc-x77m2, resource: bindings, ignored listing per whitelist
Jul  7 12:10:49.884: INFO: namespace e2e-tests-gc-x77m2 deletion completed in 6.087338774s

• [SLOW TEST:16.309 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
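The garbage collector test above deletes a replication controller without orphaning, so the GC removes the dependent pods through their ownerReferences. A sketch of issuing such a delete with client-go, assuming client-go v0.18+ method signatures; the kubeconfig path, namespace and RC name are placeholders:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the suite in this log uses /root/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Background (or Foreground) propagation asks the garbage collector to
	// delete the pods owned by the RC instead of orphaning them.
	policy := metav1.DeletePropagationBackground
	err = cs.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(),
		"simpletest.rc", // placeholder name; the suite generates its own
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("delete requested; dependent pods will be garbage collected")
}
```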
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:10:49.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jul  7 12:10:50.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:10:50.298: INFO: stderr: ""
Jul  7 12:10:50.299: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  7 12:10:50.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:10:50.422: INFO: stderr: ""
Jul  7 12:10:50.422: INFO: stdout: "update-demo-nautilus-gn6ht update-demo-nautilus-j88rj "
Jul  7 12:10:50.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn6ht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:10:50.518: INFO: stderr: ""
Jul  7 12:10:50.518: INFO: stdout: ""
Jul  7 12:10:50.518: INFO: update-demo-nautilus-gn6ht is created but not running
Jul  7 12:10:55.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:10:55.611: INFO: stderr: ""
Jul  7 12:10:55.611: INFO: stdout: "update-demo-nautilus-gn6ht update-demo-nautilus-j88rj "
Jul  7 12:10:55.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn6ht -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:10:55.700: INFO: stderr: ""
Jul  7 12:10:55.700: INFO: stdout: "true"
Jul  7 12:10:55.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn6ht -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:10:55.787: INFO: stderr: ""
Jul  7 12:10:55.787: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 12:10:55.787: INFO: validating pod update-demo-nautilus-gn6ht
Jul  7 12:10:55.790: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 12:10:55.790: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  7 12:10:55.790: INFO: update-demo-nautilus-gn6ht is verified up and running
Jul  7 12:10:55.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j88rj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:10:55.889: INFO: stderr: ""
Jul  7 12:10:55.889: INFO: stdout: "true"
Jul  7 12:10:55.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j88rj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:10:56.004: INFO: stderr: ""
Jul  7 12:10:56.004: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  7 12:10:56.004: INFO: validating pod update-demo-nautilus-j88rj
Jul  7 12:10:56.008: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  7 12:10:56.009: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  7 12:10:56.009: INFO: update-demo-nautilus-j88rj is verified up and running
STEP: rolling-update to new replication controller
Jul  7 12:10:56.011: INFO: scanned /root for discovery docs: 
Jul  7 12:10:56.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:11:18.717: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul  7 12:11:18.717: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  7 12:11:18.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:11:18.815: INFO: stderr: ""
Jul  7 12:11:18.815: INFO: stdout: "update-demo-kitten-g6jhv update-demo-kitten-sqk8t "
Jul  7 12:11:18.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g6jhv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:11:18.928: INFO: stderr: ""
Jul  7 12:11:18.928: INFO: stdout: "true"
Jul  7 12:11:18.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-g6jhv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:11:19.033: INFO: stderr: ""
Jul  7 12:11:19.033: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul  7 12:11:19.033: INFO: validating pod update-demo-kitten-g6jhv
Jul  7 12:11:19.037: INFO: got data: {
  "image": "kitten.jpg"
}

Jul  7 12:11:19.037: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul  7 12:11:19.037: INFO: update-demo-kitten-g6jhv is verified up and running
Jul  7 12:11:19.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sqk8t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:11:19.139: INFO: stderr: ""
Jul  7 12:11:19.139: INFO: stdout: "true"
Jul  7 12:11:19.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sqk8t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-pqb2p'
Jul  7 12:11:19.240: INFO: stderr: ""
Jul  7 12:11:19.240: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul  7 12:11:19.240: INFO: validating pod update-demo-kitten-sqk8t
Jul  7 12:11:19.245: INFO: got data: {
  "image": "kitten.jpg"
}

Jul  7 12:11:19.245: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul  7 12:11:19.245: INFO: update-demo-kitten-sqk8t is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:11:19.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pqb2p" for this suite.
Jul  7 12:11:43.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:11:43.356: INFO: namespace: e2e-tests-kubectl-pqb2p, resource: bindings, ignored listing per whitelist
Jul  7 12:11:43.359: INFO: namespace e2e-tests-kubectl-pqb2p deletion completed in 24.110427143s

• [SLOW TEST:53.475 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:11:43.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jul  7 12:11:43.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jul  7 12:11:43.694: INFO: stderr: ""
Jul  7 12:11:43.694: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:11:43.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-n4xcj" for this suite.
Jul  7 12:11:49.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:11:49.730: INFO: namespace: e2e-tests-kubectl-n4xcj, resource: bindings, ignored listing per whitelist
Jul  7 12:11:49.798: INFO: namespace e2e-tests-kubectl-n4xcj deletion completed in 6.100244399s

• [SLOW TEST:6.439 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:11:49.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul  7 12:11:49.948: INFO: Waiting up to 5m0s for pod "downward-api-09470674-c04b-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-rckjj" to be "success or failure"
Jul  7 12:11:49.959: INFO: Pod "downward-api-09470674-c04b-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.483873ms
Jul  7 12:11:51.963: INFO: Pod "downward-api-09470674-c04b-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01575547s
Jul  7 12:11:53.968: INFO: Pod "downward-api-09470674-c04b-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020597465s
STEP: Saw pod success
Jul  7 12:11:53.968: INFO: Pod "downward-api-09470674-c04b-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:11:53.972: INFO: Trying to get logs from node hunter-worker2 pod downward-api-09470674-c04b-11ea-9ad7-0242ac11001b container dapi-container: 
STEP: delete the pod
Jul  7 12:11:54.159: INFO: Waiting for pod downward-api-09470674-c04b-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:11:54.175: INFO: Pod downward-api-09470674-c04b-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:11:54.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rckjj" for this suite.
Jul  7 12:12:00.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:12:00.235: INFO: namespace: e2e-tests-downward-api-rckjj, resource: bindings, ignored listing per whitelist
Jul  7 12:12:00.266: INFO: namespace e2e-tests-downward-api-rckjj deletion completed in 6.087080817s

• [SLOW TEST:10.467 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
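The Downward API test above exposes container CPU and memory limits as environment variables; when no limits are set on the container, the values default to the node's allocatable resources. A sketch of env vars populated via resourceFieldRef, with assumed variable names, image and command:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"/bin/sh", "-c", "env"},
				// No resources.limits are set on this container, so these
				// resolve to the node's allocatable CPU and memory.
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```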
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:12:00.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  7 12:12:00.421: INFO: Waiting up to 5m0s for pod "pod-0f8c709b-c04b-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-8q2qs" to be "success or failure"
Jul  7 12:12:00.484: INFO: Pod "pod-0f8c709b-c04b-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 62.453464ms
Jul  7 12:12:02.488: INFO: Pod "pod-0f8c709b-c04b-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0670301s
Jul  7 12:12:04.491: INFO: Pod "pod-0f8c709b-c04b-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069845635s
STEP: Saw pod success
Jul  7 12:12:04.491: INFO: Pod "pod-0f8c709b-c04b-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:12:04.493: INFO: Trying to get logs from node hunter-worker pod pod-0f8c709b-c04b-11ea-9ad7-0242ac11001b container test-container: 
STEP: delete the pod
Jul  7 12:12:04.977: INFO: Waiting for pod pod-0f8c709b-c04b-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:12:04.994: INFO: Pod pod-0f8c709b-c04b-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:12:04.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-8q2qs" for this suite.
Jul  7 12:12:11.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:12:11.108: INFO: namespace: e2e-tests-emptydir-8q2qs, resource: bindings, ignored listing per whitelist
Jul  7 12:12:11.143: INFO: namespace e2e-tests-emptydir-8q2qs deletion completed in 6.144452202s

• [SLOW TEST:10.877 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
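The EmptyDir test variant above, "(non-root,0666,default)", runs the container as a non-root user, writes a file with mode 0666 into an emptyDir on the default medium, and verifies mode and content. A sketch of that combination; the UID, paths, image and command are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1001), // placeholder non-root UID
			},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Empty source = default medium (node disk);
					// corev1.StorageMediumMemory would use tmpfs instead.
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				Command: []string{"/bin/sh", "-c",
					"umask 0; echo hello > /test-volume/file && chmod 0666 /test-volume/file && ls -l /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```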
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:12:11.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-16083fe2-c04b-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume configMaps
Jul  7 12:12:11.302: INFO: Waiting up to 5m0s for pod "pod-configmaps-1608c221-c04b-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-configmap-wq8dc" to be "success or failure"
Jul  7 12:12:11.319: INFO: Pod "pod-configmaps-1608c221-c04b-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.121768ms
Jul  7 12:12:13.460: INFO: Pod "pod-configmaps-1608c221-c04b-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157595843s
Jul  7 12:12:15.463: INFO: Pod "pod-configmaps-1608c221-c04b-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.161309219s
Jul  7 12:12:17.467: INFO: Pod "pod-configmaps-1608c221-c04b-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.164970948s
STEP: Saw pod success
Jul  7 12:12:17.467: INFO: Pod "pod-configmaps-1608c221-c04b-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:12:17.469: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-1608c221-c04b-11ea-9ad7-0242ac11001b container configmap-volume-test: 
STEP: delete the pod
Jul  7 12:12:17.602: INFO: Waiting for pod pod-configmaps-1608c221-c04b-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:12:17.642: INFO: Pod pod-configmaps-1608c221-c04b-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:12:17.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wq8dc" for this suite.
Jul  7 12:12:23.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:12:23.670: INFO: namespace: e2e-tests-configmap-wq8dc, resource: bindings, ignored listing per whitelist
Jul  7 12:12:23.723: INFO: namespace e2e-tests-configmap-wq8dc deletion completed in 6.077100855s

• [SLOW TEST:12.579 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
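The ConfigMap test above mounts the same ConfigMap into two separate volumes of one pod and reads the data back from both mount points. A sketch of that layout; the ConfigMap name, data key and mount paths are placeholders:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cmSource := func() *corev1.ConfigMapVolumeSource {
		return &corev1.ConfigMapVolumeSource{
			// Placeholder name; the suite creates configmap-test-volume-<uuid>.
			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
		}
	}
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// The same ConfigMap backs two independent volumes...
			Volumes: []corev1.Volume{
				{Name: "configmap-volume-1", VolumeSource: corev1.VolumeSource{ConfigMap: cmSource()}},
				{Name: "configmap-volume-2", VolumeSource: corev1.VolumeSource{ConfigMap: cmSource()}},
			},
			Containers: []corev1.Container{{
				Name:  "configmap-volume-test",
				Image: "busybox:1.29",
				Command: []string{"/bin/sh", "-c",
					"cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"},
				// ...mounted at two different paths inside the same container.
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1"},
					{Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```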
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:12:23.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 12:12:23.884: INFO: Creating deployment "nginx-deployment"
Jul  7 12:12:23.906: INFO: Waiting for observed generation 1
Jul  7 12:12:26.335: INFO: Waiting for all required pods to come up
Jul  7 12:12:26.338: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul  7 12:12:38.347: INFO: Waiting for deployment "nginx-deployment" to complete
Jul  7 12:12:38.353: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jul  7 12:12:38.359: INFO: Updating deployment nginx-deployment
Jul  7 12:12:38.359: INFO: Waiting for observed generation 2
Jul  7 12:12:40.592: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul  7 12:12:40.595: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul  7 12:12:40.679: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul  7 12:12:40.818: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul  7 12:12:40.818: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul  7 12:12:40.904: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul  7 12:12:41.044: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jul  7 12:12:41.044: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jul  7 12:12:41.266: INFO: Updating deployment nginx-deployment
Jul  7 12:12:41.266: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jul  7 12:12:41.516: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul  7 12:12:43.832: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul  7 12:12:44.057: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-74lv7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-74lv7/deployments/nginx-deployment,UID:1d899f47-c04b-11ea-a300-0242ac110004,ResourceVersion:605584,Generation:3,CreationTimestamp:2020-07-07 12:12:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-07-07 12:12:41 +0000 UTC 2020-07-07 12:12:41 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-07 12:12:41 +0000 UTC 2020-07-07 12:12:23 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Jul  7 12:12:44.094: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-74lv7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-74lv7/replicasets/nginx-deployment-5c98f8fb5,UID:262a578b-c04b-11ea-a300-0242ac110004,ResourceVersion:605573,Generation:3,CreationTimestamp:2020-07-07 12:12:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1d899f47-c04b-11ea-a300-0242ac110004 0xc000df6377 0xc000df6378}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  7 12:12:44.094: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jul  7 12:12:44.095: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-74lv7,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-74lv7/replicasets/nginx-deployment-85ddf47c5d,UID:1d92bb7d-c04b-11ea-a300-0242ac110004,ResourceVersion:605579,Generation:3,CreationTimestamp:2020-07-07 12:12:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1d899f47-c04b-11ea-a300-0242ac110004 0xc000df6497 0xc000df6498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
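The ReplicaSet dumps above show how the controller tracks proportional scaling: both ReplicaSets carry the annotations deployment.kubernetes.io/desired-replicas: 30 and deployment.kubernetes.io/max-replicas: 33, and their spec.replicas values (20 on the old revision, 13 on the new) sum to that maximum. In the Deployment status this surfaces as Replicas:33, with AvailableReplicas:8 coming entirely from the old, pullable nginx:1.14-alpine pods and UnavailableReplicas:25 = 33 - 8 covering everything still stuck on nginx:404. A small sketch for reading those annotations directly, again using the namespace and label from the log (the command is illustrative only):

  # dump each ReplicaSet's desired count and proportional-scaling annotations
  kubectl -n e2e-tests-deployment-74lv7 get rs -l name=nginx -o yaml \
    | grep -E 'name: nginx-deployment|replicas:|desired-replicas|max-replicas'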
Jul  7 12:12:45.116: INFO: Pod "nginx-deployment-5c98f8fb5-46ncp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-46ncp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-5c98f8fb5-46ncp,UID:2822dd44-c04b-11ea-a300-0242ac110004,ResourceVersion:605557,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 262a578b-c04b-11ea-a300-0242ac110004 0xc001d6d147 0xc001d6d148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d6d230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d6d250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.117: INFO: Pod "nginx-deployment-5c98f8fb5-72bcf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-72bcf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-5c98f8fb5-72bcf,UID:282bce45-c04b-11ea-a300-0242ac110004,ResourceVersion:605564,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 262a578b-c04b-11ea-a300-0242ac110004 0xc001d6d2c0 0xc001d6d2c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d6d480} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d6d4a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.117: INFO: Pod "nginx-deployment-5c98f8fb5-b2n7k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b2n7k,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-5c98f8fb5-b2n7k,UID:2822d69c-c04b-11ea-a300-0242ac110004,ResourceVersion:605555,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 262a578b-c04b-11ea-a300-0242ac110004 0xc001d6d510 0xc001d6d511}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d6d590} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d6d640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.117: INFO: Pod "nginx-deployment-5c98f8fb5-bg5cq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bg5cq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-5c98f8fb5-bg5cq,UID:280b8921-c04b-11ea-a300-0242ac110004,ResourceVersion:605604,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 262a578b-c04b-11ea-a300-0242ac110004 0xc001d6d6b0 0xc001d6d6b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d6d730} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d6d750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-07 12:12:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.117: INFO: Pod "nginx-deployment-5c98f8fb5-c49c2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-c49c2,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-5c98f8fb5-c49c2,UID:265567e9-c04b-11ea-a300-0242ac110004,ResourceVersion:605504,Generation:0,CreationTimestamp:2020-07-07 12:12:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 262a578b-c04b-11ea-a300-0242ac110004 0xc001d6d880 0xc001d6d881}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d6d900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d6d920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-07 12:12:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.117: INFO: Pod "nginx-deployment-5c98f8fb5-h9xr8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-h9xr8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-5c98f8fb5-h9xr8,UID:2633be18-c04b-11ea-a300-0242ac110004,ResourceVersion:605496,Generation:0,CreationTimestamp:2020-07-07 12:12:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 262a578b-c04b-11ea-a300-0242ac110004 0xc001d6d9e0 0xc001d6d9e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d6da60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d6da80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-07 12:12:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.118: INFO: Pod "nginx-deployment-5c98f8fb5-kf2bk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kf2bk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-5c98f8fb5-kf2bk,UID:2633b2fa-c04b-11ea-a300-0242ac110004,ResourceVersion:605486,Generation:0,CreationTimestamp:2020-07-07 12:12:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 262a578b-c04b-11ea-a300-0242ac110004 0xc001d6db40 0xc001d6db41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d6dbc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d6dbe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-07 12:12:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.118: INFO: Pod "nginx-deployment-5c98f8fb5-n9zhp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-n9zhp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-5c98f8fb5-n9zhp,UID:280ba984-c04b-11ea-a300-0242ac110004,ResourceVersion:605622,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 262a578b-c04b-11ea-a300-0242ac110004 0xc001d6dca0 0xc001d6dca1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d6dd20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d6dd40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-07 12:12:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.118: INFO: Pod "nginx-deployment-5c98f8fb5-pnrhq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-pnrhq,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-5c98f8fb5-pnrhq,UID:262bb6b3-c04b-11ea-a300-0242ac110004,ResourceVersion:605480,Generation:0,CreationTimestamp:2020-07-07 12:12:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 262a578b-c04b-11ea-a300-0242ac110004 0xc001d6de00 0xc001d6de01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d6de80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d6dea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-07 12:12:38 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.118: INFO: Pod "nginx-deployment-5c98f8fb5-qf78f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qf78f,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-5c98f8fb5-qf78f,UID:2822e59e-c04b-11ea-a300-0242ac110004,ResourceVersion:605560,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 262a578b-c04b-11ea-a300-0242ac110004 0xc001d6df60 0xc001d6df61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d6dfe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024aa000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.118: INFO: Pod "nginx-deployment-5c98f8fb5-qvkpl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-qvkpl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-5c98f8fb5-qvkpl,UID:264fec53-c04b-11ea-a300-0242ac110004,ResourceVersion:605502,Generation:0,CreationTimestamp:2020-07-07 12:12:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 262a578b-c04b-11ea-a300-0242ac110004 0xc0024aa070 0xc0024aa071}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024aa170} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024aa190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:39 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:38 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-07 12:12:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.118: INFO: Pod "nginx-deployment-5c98f8fb5-r6c76" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-r6c76,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-5c98f8fb5-r6c76,UID:280765de-c04b-11ea-a300-0242ac110004,ResourceVersion:605572,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 262a578b-c04b-11ea-a300-0242ac110004 0xc0024aa250 0xc0024aa251}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024aa340} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024aa360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-07 12:12:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.118: INFO: Pod "nginx-deployment-5c98f8fb5-sl5dt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-sl5dt,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-5c98f8fb5-sl5dt,UID:2822d0dd-c04b-11ea-a300-0242ac110004,ResourceVersion:605618,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 262a578b-c04b-11ea-a300-0242ac110004 0xc0024aa490 0xc0024aa491}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024aa510} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024aa530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-07 12:12:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.118: INFO: Pod "nginx-deployment-85ddf47c5d-59x6w" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-59x6w,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-59x6w,UID:1da2b819-c04b-11ea-a300-0242ac110004,ResourceVersion:605445,Generation:0,CreationTimestamp:2020-07-07 12:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc0024aa5f0 0xc0024aa5f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024aa6f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024aa710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.2.197,StartTime:2020-07-07 12:12:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-07 12:12:35 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c87bfd82b9c942084be2ce5e8b37d0b2ea92bfd7a2bf69220ed58e8c43142d20}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.119: INFO: Pod "nginx-deployment-85ddf47c5d-657x9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-657x9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-657x9,UID:280b8ca5-c04b-11ea-a300-0242ac110004,ResourceVersion:605599,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc0024aa8c0 0xc0024aa8c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024aa930} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024aa950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-07 12:12:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.119: INFO: Pod "nginx-deployment-85ddf47c5d-65z7x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-65z7x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-65z7x,UID:1d954ddb-c04b-11ea-a300-0242ac110004,ResourceVersion:605388,Generation:0,CreationTimestamp:2020-07-07 12:12:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc0024aaa80 0xc0024aaa81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024aaaf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024aab10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.1.96,StartTime:2020-07-07 12:12:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-07 12:12:29 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://527d60458df640cd471e9a9c2f198124e29cc9d210e5c710764be83d3e757bba}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.119: INFO: Pod "nginx-deployment-85ddf47c5d-6dslt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6dslt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-6dslt,UID:2822f02a-c04b-11ea-a300-0242ac110004,ResourceVersion:605562,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc0024aae30 0xc0024aae31}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024ab570} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024ab590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.119: INFO: Pod "nginx-deployment-85ddf47c5d-928kz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-928kz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-928kz,UID:2822ea37-c04b-11ea-a300-0242ac110004,ResourceVersion:605558,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc0024ab600 0xc0024ab601}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024ab670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024ab6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.119: INFO: Pod "nginx-deployment-85ddf47c5d-9895t" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9895t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-9895t,UID:1dada62e-c04b-11ea-a300-0242ac110004,ResourceVersion:605432,Generation:0,CreationTimestamp:2020-07-07 12:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc0024aba70 0xc0024aba71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024abae0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024abb00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:35 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:35 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.1.100,StartTime:2020-07-07 12:12:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-07 12:12:34 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://cb4af3b0885bdf0cf1e8fa4cf662bd1272aedcdb6ecac00c4ea242ec3a0bd8ff}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.119: INFO: Pod "nginx-deployment-85ddf47c5d-9cz7s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9cz7s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-9cz7s,UID:280b8f72-c04b-11ea-a300-0242ac110004,ResourceVersion:605550,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc0024abc70 0xc0024abc71}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024abd50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024abe20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.119: INFO: Pod "nginx-deployment-85ddf47c5d-glg68" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-glg68,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-glg68,UID:2822de84-c04b-11ea-a300-0242ac110004,ResourceVersion:605559,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc0024abe90 0xc0024abe91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024abf00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024abf20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.119: INFO: Pod "nginx-deployment-85ddf47c5d-hw62t" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hw62t,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-hw62t,UID:1da3f6cc-c04b-11ea-a300-0242ac110004,ResourceVersion:605406,Generation:0,CreationTimestamp:2020-07-07 12:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc001eba010 0xc001eba011}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eba0b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eba0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.2.195,StartTime:2020-07-07 12:12:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-07 12:12:32 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://95a6ad5451ca655ffe700006025e3aa3defbac7cc3e14a818fc0dd713db74c83}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.120: INFO: Pod "nginx-deployment-85ddf47c5d-kbhr5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kbhr5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-kbhr5,UID:2822d95e-c04b-11ea-a300-0242ac110004,ResourceVersion:605563,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc001eba3f0 0xc001eba3f1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eba460} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eba480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.120: INFO: Pod "nginx-deployment-85ddf47c5d-khtvz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-khtvz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-khtvz,UID:1dadb24c-c04b-11ea-a300-0242ac110004,ResourceVersion:605423,Generation:0,CreationTimestamp:2020-07-07 12:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc001eba5c0 0xc001eba5c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eba630} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eba660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.1.99,StartTime:2020-07-07 12:12:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-07 12:12:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6d8bae5fc9551cb2928f131b88854d0f6535241be3f0d04c44e884ad0013118f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.120: INFO: Pod "nginx-deployment-85ddf47c5d-n5q5k" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-n5q5k,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-n5q5k,UID:1da3ea07-c04b-11ea-a300-0242ac110004,ResourceVersion:605420,Generation:0,CreationTimestamp:2020-07-07 12:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc001eba750 0xc001eba751}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001eba7c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001eba7e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.1.98,StartTime:2020-07-07 12:12:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-07 12:12:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://30508806cbc006068faa0947f92fdb12a7e9e84a6befb2482c26cab9e2ae754c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.120: INFO: Pod "nginx-deployment-85ddf47c5d-nhxgs" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nhxgs,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-nhxgs,UID:1dadb43b-c04b-11ea-a300-0242ac110004,ResourceVersion:605425,Generation:0,CreationTimestamp:2020-07-07 12:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc001ebaad0 0xc001ebaad1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ebac30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ebad20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:10.244.2.196,StartTime:2020-07-07 12:12:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-07 12:12:33 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b42e1ea95e6b3b96029cadd7a5f285efd59d7687f89137e567121cab9442d8a0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.120: INFO: Pod "nginx-deployment-85ddf47c5d-nttz8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nttz8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-nttz8,UID:1da3cf05-c04b-11ea-a300-0242ac110004,ResourceVersion:605401,Generation:0,CreationTimestamp:2020-07-07 12:12:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc001ebaeb0 0xc001ebaeb1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ebaf60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ebaf80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:24 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.1.97,StartTime:2020-07-07 12:12:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-07 12:12:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://35064443be5722e64a7d821f8cca9dc66714a75615ef706489ce215250be1c8f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.120: INFO: Pod "nginx-deployment-85ddf47c5d-rrjvk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rrjvk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-rrjvk,UID:28077726-c04b-11ea-a300-0242ac110004,ResourceVersion:605578,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc001ebb0e0 0xc001ebb0e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ebb150} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ebb170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-07 12:12:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.120: INFO: Pod "nginx-deployment-85ddf47c5d-scj46" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-scj46,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-scj46,UID:280b91e9-c04b-11ea-a300-0242ac110004,ResourceVersion:605591,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc001ebb260 0xc001ebb261}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ebb2d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ebb2f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-07 12:12:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.120: INFO: Pod "nginx-deployment-85ddf47c5d-shprz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-shprz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-shprz,UID:280b7a0b-c04b-11ea-a300-0242ac110004,ResourceVersion:605581,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc001ebb3c0 0xc001ebb3c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ebb430} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ebb490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-07 12:12:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.121: INFO: Pod "nginx-deployment-85ddf47c5d-txjsb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-txjsb,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-txjsb,UID:27ec4538-c04b-11ea-a300-0242ac110004,ResourceVersion:605568,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc001ebb550 0xc001ebb551}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ebb5c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ebb5e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-07 12:12:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.121: INFO: Pod "nginx-deployment-85ddf47c5d-tz9h8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tz9h8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-tz9h8,UID:2822f0be-c04b-11ea-a300-0242ac110004,ResourceVersion:605561,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc001ebb6c0 0xc001ebb6c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ebb740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ebb770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  7 12:12:45.121: INFO: Pod "nginx-deployment-85ddf47c5d-wj4c6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wj4c6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-74lv7,SelfLink:/api/v1/namespaces/e2e-tests-deployment-74lv7/pods/nginx-deployment-85ddf47c5d-wj4c6,UID:2807767b-c04b-11ea-a300-0242ac110004,ResourceVersion:605585,Generation:0,CreationTimestamp:2020-07-07 12:12:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 1d92bb7d-c04b-11ea-a300-0242ac110004 0xc001ebb7e0 0xc001ebb7e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6xfhw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6xfhw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6xfhw true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001ebb930} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ebb950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:12:41 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.3,PodIP:,StartTime:2020-07-07 12:12:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:12:45.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-74lv7" for this suite.
Jul  7 12:13:19.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:13:19.157: INFO: namespace: e2e-tests-deployment-74lv7, resource: bindings, ignored listing per whitelist
Jul  7 12:13:19.218: INFO: namespace e2e-tests-deployment-74lv7 deletion completed in 33.139434379s

• [SLOW TEST:55.495 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
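The proportional-scaling test above scales the Deployment while a rollout is still in flight and then checks how the desired replicas are split between the old and new ReplicaSets, which is why some of the dumped pods are available and others are still Pending. The sketch below is not the test's own code; it is a minimal client-go program (assuming a recent client-go with context-taking calls, whereas the cluster in this run is v1.13 with older signatures) that reads back the per-ReplicaSet split for the same labels.

// Sketch only: inspect how a Deployment's replicas are split across its
// ReplicaSets, as the proportional-scaling test above verifies.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns := "e2e-tests-deployment-74lv7" // namespace from this run; any namespace works

	// List the ReplicaSets selected by the nginx label used above and report
	// how many replicas each one wants versus has available.
	rsList, err := cs.AppsV1().ReplicaSets(ns).List(ctx, metav1.ListOptions{
		LabelSelector: "name=nginx",
	})
	if err != nil {
		panic(err)
	}
	for _, rs := range rsList.Items {
		desired := int32(0)
		if rs.Spec.Replicas != nil {
			desired = *rs.Spec.Replicas
		}
		fmt.Printf("%s: desired=%d available=%d\n", rs.Name, desired, rs.Status.AvailableReplicas)
	}
}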
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:13:19.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:13:25.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-dcq6n" for this suite.
Jul  7 12:13:31.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:13:31.616: INFO: namespace: e2e-tests-namespaces-dcq6n, resource: bindings, ignored listing per whitelist
Jul  7 12:13:31.620: INFO: namespace e2e-tests-namespaces-dcq6n deletion completed in 6.080420339s
STEP: Destroying namespace "e2e-tests-nsdeletetest-4b72z" for this suite.
Jul  7 12:13:31.623: INFO: Namespace e2e-tests-nsdeletetest-4b72z was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-7xnws" for this suite.
Jul  7 12:13:37.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:13:37.705: INFO: namespace: e2e-tests-nsdeletetest-7xnws, resource: bindings, ignored listing per whitelist
Jul  7 12:13:37.730: INFO: namespace e2e-tests-nsdeletetest-7xnws deletion completed in 6.107158997s

• [SLOW TEST:18.512 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
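The check above boils down to: create a service in a namespace, delete the namespace, recreate it, and confirm no service survived. A minimal sketch of that final verification, assuming a recent client-go and an illustrative namespace name (this run uses generated e2e-tests-nsdeletetest-* names):

// Sketch only: after a namespace is deleted and recreated, no services
// should remain in it.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Illustrative name; the e2e run above generates a random one.
	ns := "nsdeletetest"
	svcs, err := cs.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	if len(svcs.Items) != 0 {
		panic(fmt.Sprintf("expected no services after namespace recreation, found %d", len(svcs.Items)))
	}
	fmt.Println("no services survived namespace deletion")
}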
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:13:37.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul  7 12:13:38.628: INFO: Pod name wrapped-volume-race-4a0319ca-c04b-11ea-9ad7-0242ac11001b: Found 0 pods out of 5
Jul  7 12:13:43.635: INFO: Pod name wrapped-volume-race-4a0319ca-c04b-11ea-9ad7-0242ac11001b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4a0319ca-c04b-11ea-9ad7-0242ac11001b in namespace e2e-tests-emptydir-wrapper-q89lt, will wait for the garbage collector to delete the pods
Jul  7 12:15:49.716: INFO: Deleting ReplicationController wrapped-volume-race-4a0319ca-c04b-11ea-9ad7-0242ac11001b took: 6.727634ms
Jul  7 12:15:49.916: INFO: Terminating ReplicationController wrapped-volume-race-4a0319ca-c04b-11ea-9ad7-0242ac11001b pods took: 200.2106ms
STEP: Creating RC which spawns configmap-volume pods
Jul  7 12:16:35.003: INFO: Pod name wrapped-volume-race-b329f617-c04b-11ea-9ad7-0242ac11001b: Found 0 pods out of 5
Jul  7 12:16:40.009: INFO: Pod name wrapped-volume-race-b329f617-c04b-11ea-9ad7-0242ac11001b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b329f617-c04b-11ea-9ad7-0242ac11001b in namespace e2e-tests-emptydir-wrapper-q89lt, will wait for the garbage collector to delete the pods
Jul  7 12:19:22.568: INFO: Deleting ReplicationController wrapped-volume-race-b329f617-c04b-11ea-9ad7-0242ac11001b took: 51.506378ms
Jul  7 12:19:22.669: INFO: Terminating ReplicationController wrapped-volume-race-b329f617-c04b-11ea-9ad7-0242ac11001b pods took: 100.394492ms
STEP: Creating RC which spawns configmap-volume pods
Jul  7 12:20:04.912: INFO: Pod name wrapped-volume-race-304e2227-c04c-11ea-9ad7-0242ac11001b: Found 0 pods out of 5
Jul  7 12:20:09.918: INFO: Pod name wrapped-volume-race-304e2227-c04c-11ea-9ad7-0242ac11001b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-304e2227-c04c-11ea-9ad7-0242ac11001b in namespace e2e-tests-emptydir-wrapper-q89lt, will wait for the garbage collector to delete the pods
Jul  7 12:22:49.998: INFO: Deleting ReplicationController wrapped-volume-race-304e2227-c04c-11ea-9ad7-0242ac11001b took: 6.577093ms
Jul  7 12:22:50.099: INFO: Terminating ReplicationController wrapped-volume-race-304e2227-c04c-11ea-9ad7-0242ac11001b pods took: 100.2112ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:23:34.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-q89lt" for this suite.
Jul  7 12:23:43.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:23:43.093: INFO: namespace: e2e-tests-emptydir-wrapper-q89lt, resource: bindings, ignored listing per whitelist
Jul  7 12:23:43.154: INFO: namespace e2e-tests-emptydir-wrapper-q89lt deletion completed in 8.111841085s

• [SLOW TEST:605.423 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
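The race test above repeatedly creates 50 ConfigMaps plus a ReplicationController whose pods mount them all as volumes, then tears everything down while the kubelet may still be setting those volumes up. The sketch below is an illustrative reduction of that setup, not the test source: it creates a handful of ConfigMaps and one pod that mounts each of them, assuming a recent client-go, the "default" namespace, and the nginx image already used in this run.

// Sketch only: one pod mounting several ConfigMap-backed volumes, the shape
// of workload the wrapped-volume race test above spawns in bulk.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns := "default" // illustrative; the real test uses a generated namespace

	var volumes []corev1.Volume
	var mounts []corev1.VolumeMount
	for i := 0; i < 5; i++ {
		name := fmt.Sprintf("wrapped-race-cm-%d", i)
		// One ConfigMap per volume, all mounted into the same pod.
		_, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, &corev1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Data:       map[string]string{"data": "1"},
		}, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		volumes = append(volumes, corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: name},
				},
			},
		})
		mounts = append(mounts, corev1.VolumeMount{Name: name, MountPath: "/etc/cm/" + name})
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-race-sketch"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "test",
				Image:        "docker.io/library/nginx:1.14-alpine",
				VolumeMounts: mounts,
			}},
			Volumes: volumes,
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created pod mounting all ConfigMap volumes")
}

The real test scales this shape up (50 ConfigMaps, 5 pods via an RC) and deletes the controller while pods are starting, which is where the emptyDir wrapper race used to appear.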
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:23:43.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-b2769178-c04c-11ea-9ad7-0242ac11001b
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-b2769178-c04c-11ea-9ad7-0242ac11001b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:23:49.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-rtsbk" for this suite.
Jul  7 12:24:11.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:24:11.436: INFO: namespace: e2e-tests-configmap-rtsbk, resource: bindings, ignored listing per whitelist
Jul  7 12:24:11.507: INFO: namespace e2e-tests-configmap-rtsbk deletion completed in 22.124142365s

• [SLOW TEST:28.353 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
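The ConfigMap volume test above creates a pod with the ConfigMap mounted, updates the ConfigMap, and then waits for the mounted file to change. The update step looks roughly like the following sketch (assumed, not the test's own code; the namespace, name, and key are illustrative, and a recent client-go is assumed):

// Sketch only: update a ConfigMap that is mounted as a volume and let the
// kubelet sync the file contents.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	ns, name := "default", "configmap-test-upd"

	cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	// Flip the value; pods mounting this ConfigMap as a volume will see the
	// new content after the kubelet's periodic sync.
	cm.Data["data-1"] = "value-2"
	if _, err := cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("configmap updated; mounted file will follow")
}

The kubelet rewrites the projected file on its periodic sync, which is why the test simply waits to observe the update rather than restarting the pod.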
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:24:11.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jul  7 12:24:15.814: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:24:39.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-w65s8" for this suite.
Jul  7 12:24:45.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:24:45.938: INFO: namespace: e2e-tests-namespaces-w65s8, resource: bindings, ignored listing per whitelist
Jul  7 12:24:46.019: INFO: namespace e2e-tests-namespaces-w65s8 deletion completed in 6.109301938s
STEP: Destroying namespace "e2e-tests-nsdeletetest-7ntct" for this suite.
Jul  7 12:24:46.022: INFO: Namespace e2e-tests-nsdeletetest-7ntct was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-75d54" for this suite.
Jul  7 12:24:52.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:24:52.066: INFO: namespace: e2e-tests-nsdeletetest-75d54, resource: bindings, ignored listing per whitelist
Jul  7 12:24:52.109: INFO: namespace e2e-tests-nsdeletetest-75d54 deletion completed in 6.086926187s

• [SLOW TEST:40.601 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
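The Namespaces test above verifies that deleting a namespace removes the pods created inside it. A minimal manual reproduction against the same kubeconfig; nsdelete-demo and ns-pod are placeholder names.

kubectl create namespace nsdelete-demo
kubectl run ns-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never --namespace=nsdelete-demo
kubectl wait --for=condition=Ready pod/ns-pod --namespace=nsdelete-demo
kubectl delete namespace nsdelete-demo
# Once deletion finishes, both the namespace and its pod are gone:
kubectl get pods --namespace=nsdelete-demo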
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:24:52.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul  7 12:24:56.786: INFO: Successfully updated pod "annotationupdatedb930b8a-c04c-11ea-9ad7-0242ac11001b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:24:58.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fnvvz" for this suite.
Jul  7 12:25:20.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:25:20.942: INFO: namespace: e2e-tests-projected-fnvvz, resource: bindings, ignored listing per whitelist
Jul  7 12:25:20.945: INFO: namespace e2e-tests-projected-fnvvz deletion completed in 22.132489897s

• [SLOW TEST:28.836 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:25:20.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul  7 12:25:29.159: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:29.163: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 12:25:31.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:31.169: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 12:25:33.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:33.168: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 12:25:35.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:35.167: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 12:25:37.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:37.204: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 12:25:39.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:39.168: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 12:25:41.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:41.167: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 12:25:43.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:43.168: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 12:25:45.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:45.168: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 12:25:47.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:47.169: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 12:25:49.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:49.167: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 12:25:51.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:51.168: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 12:25:53.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:53.167: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  7 12:25:55.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  7 12:25:55.168: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:25:55.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-k7ffl" for this suite.
Jul  7 12:26:23.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:26:23.249: INFO: namespace: e2e-tests-container-lifecycle-hook-k7ffl, resource: bindings, ignored listing per whitelist
Jul  7 12:26:26.361: INFO: namespace e2e-tests-container-lifecycle-hook-k7ffl deletion completed in 31.187753931s

• [SLOW TEST:65.416 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
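The lifecycle-hook test above attaches a postStart exec hook to a container and checks that the hook ran before deleting the pod. A minimal pod spec of the same shape, with placeholder names and an assumed hook command:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo hook-ran > /usr/share/nginx/html/hook.txt"]
EOF
# After the pod is Running, the hook's side effect is visible inside the container:
kubectl exec poststart-demo -- cat /usr/share/nginx/html/hook.txt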
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:26:26.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul  7 12:26:26.891: INFO: Waiting up to 5m0s for pod "pod-13fb8619-c04d-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-emptydir-4dbdv" to be "success or failure"
Jul  7 12:26:26.901: INFO: Pod "pod-13fb8619-c04d-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.150673ms
Jul  7 12:26:28.929: INFO: Pod "pod-13fb8619-c04d-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037922676s
Jul  7 12:26:30.933: INFO: Pod "pod-13fb8619-c04d-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042300066s
Jul  7 12:26:32.938: INFO: Pod "pod-13fb8619-c04d-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046688551s
Jul  7 12:26:34.942: INFO: Pod "pod-13fb8619-c04d-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050760523s
STEP: Saw pod success
Jul  7 12:26:34.942: INFO: Pod "pod-13fb8619-c04d-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:26:34.945: INFO: Trying to get logs from node hunter-worker pod pod-13fb8619-c04d-11ea-9ad7-0242ac11001b container test-container: 
STEP: delete the pod
Jul  7 12:26:35.007: INFO: Waiting for pod pod-13fb8619-c04d-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:26:35.014: INFO: Pod pod-13fb8619-c04d-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:26:35.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4dbdv" for this suite.
Jul  7 12:26:41.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:26:41.047: INFO: namespace: e2e-tests-emptydir-4dbdv, resource: bindings, ignored listing per whitelist
Jul  7 12:26:41.088: INFO: namespace e2e-tests-emptydir-4dbdv deletion completed in 6.069216876s

• [SLOW TEST:14.726 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
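The emptyDir test name (non-root,0644,default) describes a container that runs as a non-root user, writes a file with mode 0644 into an emptyDir volume on the default disk-backed medium, and checks the result. A sketch under those assumptions; the uid, image, and paths are placeholders:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  securityContext:
    runAsUser: 1001
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "touch /data/file && chmod 0644 /data/file && ls -ln /data/file"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir: {}
EOF
kubectl logs emptydir-demo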
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:26:41.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-58gf
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 12:26:41.251: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-58gf" in namespace "e2e-tests-subpath-fqprk" to be "success or failure"
Jul  7 12:26:41.318: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Pending", Reason="", readiness=false. Elapsed: 67.614289ms
Jul  7 12:26:43.346: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095737457s
Jul  7 12:26:45.350: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099776593s
Jul  7 12:26:47.354: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103833119s
Jul  7 12:26:49.359: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Running", Reason="", readiness=false. Elapsed: 8.108239074s
Jul  7 12:26:51.363: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Running", Reason="", readiness=false. Elapsed: 10.112664008s
Jul  7 12:26:53.368: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Running", Reason="", readiness=false. Elapsed: 12.116990665s
Jul  7 12:26:55.371: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Running", Reason="", readiness=false. Elapsed: 14.120825229s
Jul  7 12:26:57.375: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Running", Reason="", readiness=false. Elapsed: 16.124728737s
Jul  7 12:26:59.380: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Running", Reason="", readiness=false. Elapsed: 18.129640297s
Jul  7 12:27:01.385: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Running", Reason="", readiness=false. Elapsed: 20.134609494s
Jul  7 12:27:03.388: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Running", Reason="", readiness=false. Elapsed: 22.137506792s
Jul  7 12:27:05.392: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Running", Reason="", readiness=false. Elapsed: 24.141507457s
Jul  7 12:27:07.396: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Running", Reason="", readiness=false. Elapsed: 26.145266139s
Jul  7 12:27:09.400: INFO: Pod "pod-subpath-test-projected-58gf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.148981805s
STEP: Saw pod success
Jul  7 12:27:09.400: INFO: Pod "pod-subpath-test-projected-58gf" satisfied condition "success or failure"
Jul  7 12:27:09.402: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-58gf container test-container-subpath-projected-58gf: 
STEP: delete the pod
Jul  7 12:27:09.541: INFO: Waiting for pod pod-subpath-test-projected-58gf to disappear
Jul  7 12:27:09.554: INFO: Pod pod-subpath-test-projected-58gf no longer exists
STEP: Deleting pod pod-subpath-test-projected-58gf
Jul  7 12:27:09.554: INFO: Deleting pod "pod-subpath-test-projected-58gf" in namespace "e2e-tests-subpath-fqprk"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:27:09.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-fqprk" for this suite.
Jul  7 12:27:15.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:27:15.601: INFO: namespace: e2e-tests-subpath-fqprk, resource: bindings, ignored listing per whitelist
Jul  7 12:27:15.670: INFO: namespace e2e-tests-subpath-fqprk deletion completed in 6.110990316s

• [SLOW TEST:34.582 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
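Both Subpath cases in this run (the projected volume above and the configmap volume later on) mount a single key of a volume at an exact file path using subPath. A minimal illustration with a configMap source; all names are placeholders:

kubectl create configmap subpath-demo-cm --from-literal=greeting=hello
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/demo/greeting"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo/greeting
      subPath: greeting
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo-cm
EOF
kubectl logs subpath-demo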
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:27:15.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-lsh4w
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  7 12:27:15.761: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  7 12:27:41.947: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.244:8080/dial?request=hostName&protocol=udp&host=10.244.2.243&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-lsh4w PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 12:27:41.947: INFO: >>> kubeConfig: /root/.kube/config
I0707 12:27:41.971377       6 log.go:172] (0xc000947ef0) (0xc000de1720) Create stream
I0707 12:27:41.971406       6 log.go:172] (0xc000947ef0) (0xc000de1720) Stream added, broadcasting: 1
I0707 12:27:41.973646       6 log.go:172] (0xc000947ef0) Reply frame received for 1
I0707 12:27:41.973692       6 log.go:172] (0xc000947ef0) (0xc0026b08c0) Create stream
I0707 12:27:41.973707       6 log.go:172] (0xc000947ef0) (0xc0026b08c0) Stream added, broadcasting: 3
I0707 12:27:41.974633       6 log.go:172] (0xc000947ef0) Reply frame received for 3
I0707 12:27:41.974669       6 log.go:172] (0xc000947ef0) (0xc00149bc20) Create stream
I0707 12:27:41.974680       6 log.go:172] (0xc000947ef0) (0xc00149bc20) Stream added, broadcasting: 5
I0707 12:27:41.975583       6 log.go:172] (0xc000947ef0) Reply frame received for 5
I0707 12:27:42.039249       6 log.go:172] (0xc000947ef0) Data frame received for 3
I0707 12:27:42.039277       6 log.go:172] (0xc0026b08c0) (3) Data frame handling
I0707 12:27:42.039295       6 log.go:172] (0xc0026b08c0) (3) Data frame sent
I0707 12:27:42.039979       6 log.go:172] (0xc000947ef0) Data frame received for 3
I0707 12:27:42.040011       6 log.go:172] (0xc0026b08c0) (3) Data frame handling
I0707 12:27:42.040044       6 log.go:172] (0xc000947ef0) Data frame received for 5
I0707 12:27:42.040061       6 log.go:172] (0xc00149bc20) (5) Data frame handling
I0707 12:27:42.041855       6 log.go:172] (0xc000947ef0) Data frame received for 1
I0707 12:27:42.041873       6 log.go:172] (0xc000de1720) (1) Data frame handling
I0707 12:27:42.041891       6 log.go:172] (0xc000de1720) (1) Data frame sent
I0707 12:27:42.041916       6 log.go:172] (0xc000947ef0) (0xc000de1720) Stream removed, broadcasting: 1
I0707 12:27:42.042034       6 log.go:172] (0xc000947ef0) Go away received
I0707 12:27:42.042084       6 log.go:172] (0xc000947ef0) (0xc000de1720) Stream removed, broadcasting: 1
I0707 12:27:42.042115       6 log.go:172] (0xc000947ef0) (0xc0026b08c0) Stream removed, broadcasting: 3
I0707 12:27:42.042125       6 log.go:172] (0xc000947ef0) (0xc00149bc20) Stream removed, broadcasting: 5
Jul  7 12:27:42.042: INFO: Waiting for endpoints: map[]
Jul  7 12:27:42.044: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.244:8080/dial?request=hostName&protocol=udp&host=10.244.1.120&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-lsh4w PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 12:27:42.045: INFO: >>> kubeConfig: /root/.kube/config
I0707 12:27:42.074729       6 log.go:172] (0xc000a27a20) (0xc0026b0b40) Create stream
I0707 12:27:42.074767       6 log.go:172] (0xc000a27a20) (0xc0026b0b40) Stream added, broadcasting: 1
I0707 12:27:42.077409       6 log.go:172] (0xc000a27a20) Reply frame received for 1
I0707 12:27:42.077447       6 log.go:172] (0xc000a27a20) (0xc0026b0be0) Create stream
I0707 12:27:42.077457       6 log.go:172] (0xc000a27a20) (0xc0026b0be0) Stream added, broadcasting: 3
I0707 12:27:42.078382       6 log.go:172] (0xc000a27a20) Reply frame received for 3
I0707 12:27:42.078427       6 log.go:172] (0xc000a27a20) (0xc0026b0c80) Create stream
I0707 12:27:42.078441       6 log.go:172] (0xc000a27a20) (0xc0026b0c80) Stream added, broadcasting: 5
I0707 12:27:42.079345       6 log.go:172] (0xc000a27a20) Reply frame received for 5
I0707 12:27:42.144321       6 log.go:172] (0xc000a27a20) Data frame received for 3
I0707 12:27:42.144355       6 log.go:172] (0xc0026b0be0) (3) Data frame handling
I0707 12:27:42.144376       6 log.go:172] (0xc0026b0be0) (3) Data frame sent
I0707 12:27:42.145077       6 log.go:172] (0xc000a27a20) Data frame received for 3
I0707 12:27:42.145315       6 log.go:172] (0xc0026b0be0) (3) Data frame handling
I0707 12:27:42.145746       6 log.go:172] (0xc000a27a20) Data frame received for 5
I0707 12:27:42.145773       6 log.go:172] (0xc0026b0c80) (5) Data frame handling
I0707 12:27:42.146758       6 log.go:172] (0xc000a27a20) Data frame received for 1
I0707 12:27:42.146778       6 log.go:172] (0xc0026b0b40) (1) Data frame handling
I0707 12:27:42.146793       6 log.go:172] (0xc0026b0b40) (1) Data frame sent
I0707 12:27:42.146813       6 log.go:172] (0xc000a27a20) (0xc0026b0b40) Stream removed, broadcasting: 1
I0707 12:27:42.146858       6 log.go:172] (0xc000a27a20) Go away received
I0707 12:27:42.146972       6 log.go:172] (0xc000a27a20) (0xc0026b0b40) Stream removed, broadcasting: 1
I0707 12:27:42.147008       6 log.go:172] (0xc000a27a20) (0xc0026b0be0) Stream removed, broadcasting: 3
I0707 12:27:42.147023       6 log.go:172] (0xc000a27a20) (0xc0026b0c80) Stream removed, broadcasting: 5
Jul  7 12:27:42.147: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:27:42.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-lsh4w" for this suite.
Jul  7 12:28:04.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:28:04.239: INFO: namespace: e2e-tests-pod-network-test-lsh4w, resource: bindings, ignored listing per whitelist
Jul  7 12:28:04.241: INFO: namespace e2e-tests-pod-network-test-lsh4w deletion completed in 22.090102307s

• [SLOW TEST:48.571 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
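The intra-pod UDP check above works by exec-ing into a host-network test pod and asking the framework's webserver container (the /dial endpoint on port 8080) to send a UDP probe to another test pod on port 8081. The same request can be issued by hand; the pod and namespace names come from the run above, and the IP addresses are placeholders valid only for that run:

kubectl exec host-test-container-pod --namespace=e2e-tests-pod-network-test-lsh4w -- \
  curl -g -q -s 'http://10.244.2.244:8080/dial?request=hostName&protocol=udp&host=10.244.2.243&port=8081&tries=1'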
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:28:04.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-zz5x
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 12:28:04.399: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zz5x" in namespace "e2e-tests-subpath-7rs7l" to be "success or failure"
Jul  7 12:28:04.463: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Pending", Reason="", readiness=false. Elapsed: 64.226828ms
Jul  7 12:28:06.467: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068737753s
Jul  7 12:28:08.576: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177688149s
Jul  7 12:28:10.580: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Pending", Reason="", readiness=false. Elapsed: 6.181767831s
Jul  7 12:28:12.585: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186553791s
Jul  7 12:28:14.589: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Running", Reason="", readiness=false. Elapsed: 10.190314794s
Jul  7 12:28:16.592: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Running", Reason="", readiness=false. Elapsed: 12.193567777s
Jul  7 12:28:18.596: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Running", Reason="", readiness=false. Elapsed: 14.197791779s
Jul  7 12:28:20.600: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Running", Reason="", readiness=false. Elapsed: 16.201312906s
Jul  7 12:28:22.607: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Running", Reason="", readiness=false. Elapsed: 18.208083843s
Jul  7 12:28:24.611: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Running", Reason="", readiness=false. Elapsed: 20.212539276s
Jul  7 12:28:26.691: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Running", Reason="", readiness=false. Elapsed: 22.292241834s
Jul  7 12:28:28.695: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Running", Reason="", readiness=false. Elapsed: 24.296337152s
Jul  7 12:28:30.810: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Running", Reason="", readiness=false. Elapsed: 26.411581251s
Jul  7 12:28:32.814: INFO: Pod "pod-subpath-test-configmap-zz5x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.415474133s
STEP: Saw pod success
Jul  7 12:28:32.814: INFO: Pod "pod-subpath-test-configmap-zz5x" satisfied condition "success or failure"
Jul  7 12:28:32.817: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-zz5x container test-container-subpath-configmap-zz5x: 
STEP: delete the pod
Jul  7 12:28:32.944: INFO: Waiting for pod pod-subpath-test-configmap-zz5x to disappear
Jul  7 12:28:32.970: INFO: Pod pod-subpath-test-configmap-zz5x no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zz5x
Jul  7 12:28:32.970: INFO: Deleting pod "pod-subpath-test-configmap-zz5x" in namespace "e2e-tests-subpath-7rs7l"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:28:32.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-7rs7l" for this suite.
Jul  7 12:28:41.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:28:41.072: INFO: namespace: e2e-tests-subpath-7rs7l, resource: bindings, ignored listing per whitelist
Jul  7 12:28:41.086: INFO: namespace e2e-tests-subpath-7rs7l deletion completed in 8.111194536s

• [SLOW TEST:36.844 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:28:41.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jul  7 12:28:41.253: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:28:41.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-d6sjw" for this suite.
Jul  7 12:28:47.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:28:47.448: INFO: namespace: e2e-tests-kubectl-d6sjw, resource: bindings, ignored listing per whitelist
Jul  7 12:28:47.475: INFO: namespace e2e-tests-kubectl-d6sjw deletion completed in 6.10082818s

• [SLOW TEST:6.389 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
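The proxy test above starts kubectl proxy with port 0 (pick a free port) plus --disable-filter, then curls /api/ through it. By hand, roughly:

kubectl proxy --port=0 --disable-filter &
# kubectl prints the address it bound to (e.g. "Starting to serve on 127.0.0.1:<port>");
# use that runtime-assigned port in the request below.
curl http://127.0.0.1:<port>/api/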
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:28:47.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-67e68ad3-c04d-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume secrets
Jul  7 12:28:47.673: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-67eaa104-c04d-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-q2qd6" to be "success or failure"
Jul  7 12:28:47.677: INFO: Pod "pod-projected-secrets-67eaa104-c04d-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336354ms
Jul  7 12:28:49.681: INFO: Pod "pod-projected-secrets-67eaa104-c04d-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007537663s
Jul  7 12:28:51.686: INFO: Pod "pod-projected-secrets-67eaa104-c04d-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.012609055s
Jul  7 12:28:53.689: INFO: Pod "pod-projected-secrets-67eaa104-c04d-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01649576s
STEP: Saw pod success
Jul  7 12:28:53.690: INFO: Pod "pod-projected-secrets-67eaa104-c04d-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:28:53.692: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-67eaa104-c04d-11ea-9ad7-0242ac11001b container projected-secret-volume-test: 
STEP: delete the pod
Jul  7 12:28:53.753: INFO: Waiting for pod pod-projected-secrets-67eaa104-c04d-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:28:53.906: INFO: Pod pod-projected-secrets-67eaa104-c04d-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:28:53.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-q2qd6" for this suite.
Jul  7 12:29:02.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:29:02.097: INFO: namespace: e2e-tests-projected-q2qd6, resource: bindings, ignored listing per whitelist
Jul  7 12:29:02.131: INFO: namespace e2e-tests-projected-q2qd6 deletion completed in 8.162306131s

• [SLOW TEST:14.655 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
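The Projected secret test above mounts a Secret through a projected volume and reads it back from the pod. A minimal sketch with placeholder names:

kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      sources:
      - secret:
          name: projected-demo-secret
EOF
kubectl logs projected-secret-demo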
S
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:29:02.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jul  7 12:29:19.989: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:29:21.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-fdzmn" for this suite.
Jul  7 12:29:43.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:29:43.294: INFO: namespace: e2e-tests-replicaset-fdzmn, resource: bindings, ignored listing per whitelist
Jul  7 12:29:43.314: INFO: namespace e2e-tests-replicaset-fdzmn deletion completed in 22.13072823s

• [SLOW TEST:41.183 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
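The ReplicaSet test above relies on controller adoption: a bare pod whose labels match a ReplicaSet's selector is adopted, and changing the label afterwards releases it, after which the ReplicaSet creates a replacement. Assuming a ReplicaSet selecting name=pod-adoption-release and an already-adopted pod of that name, the release step looks roughly like:

kubectl label pod pod-adoption-release name=pod-adoption-release-released --overwrite
# The released pod keeps running but loses its controller reference,
# and the ReplicaSet spins up a new pod to satisfy its replica count:
kubectl get pods -l name=pod-adoption-release
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'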
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:29:43.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  7 12:29:43.459: INFO: Waiting up to 5m0s for pod "downwardapi-volume-892918f6-c04d-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-wcwnx" to be "success or failure"
Jul  7 12:29:43.475: INFO: Pod "downwardapi-volume-892918f6-c04d-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.898513ms
Jul  7 12:29:45.493: INFO: Pod "downwardapi-volume-892918f6-c04d-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033702093s
Jul  7 12:29:47.497: INFO: Pod "downwardapi-volume-892918f6-c04d-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.037551122s
Jul  7 12:29:49.500: INFO: Pod "downwardapi-volume-892918f6-c04d-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040741346s
STEP: Saw pod success
Jul  7 12:29:49.500: INFO: Pod "downwardapi-volume-892918f6-c04d-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:29:49.503: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-892918f6-c04d-11ea-9ad7-0242ac11001b container client-container: 
STEP: delete the pod
Jul  7 12:29:49.589: INFO: Waiting for pod downwardapi-volume-892918f6-c04d-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:29:49.618: INFO: Pod downwardapi-volume-892918f6-c04d-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:29:49.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wcwnx" for this suite.
Jul  7 12:29:55.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:29:55.706: INFO: namespace: e2e-tests-projected-wcwnx, resource: bindings, ignored listing per whitelist
Jul  7 12:29:55.708: INFO: namespace e2e-tests-projected-wcwnx deletion completed in 6.085795251s

• [SLOW TEST:12.393 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
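This projected downwardAPI case (and the cpu-limit case that follows) exposes pod and container information as files via fieldRef and resourceFieldRef items. A combined sketch with placeholder names; when the container sets no CPU limit, limits.cpu falls back to the node's allocatable CPU, which is what the next test checks:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
kubectl logs downwardapi-demo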
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:29:55.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  7 12:29:55.849: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90889a5f-c04d-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-zkxht" to be "success or failure"
Jul  7 12:29:55.853: INFO: Pod "downwardapi-volume-90889a5f-c04d-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.649438ms
Jul  7 12:29:57.936: INFO: Pod "downwardapi-volume-90889a5f-c04d-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087452987s
Jul  7 12:29:59.941: INFO: Pod "downwardapi-volume-90889a5f-c04d-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.091901037s
Jul  7 12:30:01.944: INFO: Pod "downwardapi-volume-90889a5f-c04d-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.095322178s
STEP: Saw pod success
Jul  7 12:30:01.944: INFO: Pod "downwardapi-volume-90889a5f-c04d-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:30:01.947: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-90889a5f-c04d-11ea-9ad7-0242ac11001b container client-container: 
STEP: delete the pod
Jul  7 12:30:01.968: INFO: Waiting for pod downwardapi-volume-90889a5f-c04d-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:30:01.991: INFO: Pod downwardapi-volume-90889a5f-c04d-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:30:01.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zkxht" for this suite.
Jul  7 12:30:08.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:30:08.171: INFO: namespace: e2e-tests-projected-zkxht, resource: bindings, ignored listing per whitelist
Jul  7 12:30:08.176: INFO: namespace e2e-tests-projected-zkxht deletion completed in 6.180954471s

• [SLOW TEST:12.468 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:30:08.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-ltzsb
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jul  7 12:30:08.385: INFO: Found 0 stateful pods, waiting for 3
Jul  7 12:30:18.390: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 12:30:18.390: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 12:30:18.390: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  7 12:30:28.390: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 12:30:28.390: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 12:30:28.390: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jul  7 12:30:28.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ltzsb ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  7 12:30:28.755: INFO: stderr: "I0707 12:30:28.535615    3513 log.go:172] (0xc000138840) (0xc000754640) Create stream\nI0707 12:30:28.535680    3513 log.go:172] (0xc000138840) (0xc000754640) Stream added, broadcasting: 1\nI0707 12:30:28.538734    3513 log.go:172] (0xc000138840) Reply frame received for 1\nI0707 12:30:28.538767    3513 log.go:172] (0xc000138840) (0xc0006b0e60) Create stream\nI0707 12:30:28.538776    3513 log.go:172] (0xc000138840) (0xc0006b0e60) Stream added, broadcasting: 3\nI0707 12:30:28.540002    3513 log.go:172] (0xc000138840) Reply frame received for 3\nI0707 12:30:28.540066    3513 log.go:172] (0xc000138840) (0xc00064a000) Create stream\nI0707 12:30:28.540085    3513 log.go:172] (0xc000138840) (0xc00064a000) Stream added, broadcasting: 5\nI0707 12:30:28.540982    3513 log.go:172] (0xc000138840) Reply frame received for 5\nI0707 12:30:28.747795    3513 log.go:172] (0xc000138840) Data frame received for 3\nI0707 12:30:28.747824    3513 log.go:172] (0xc0006b0e60) (3) Data frame handling\nI0707 12:30:28.747836    3513 log.go:172] (0xc0006b0e60) (3) Data frame sent\nI0707 12:30:28.748088    3513 log.go:172] (0xc000138840) Data frame received for 5\nI0707 12:30:28.748115    3513 log.go:172] (0xc000138840) Data frame received for 3\nI0707 12:30:28.748226    3513 log.go:172] (0xc00064a000) (5) Data frame handling\nI0707 12:30:28.748250    3513 log.go:172] (0xc0006b0e60) (3) Data frame handling\nI0707 12:30:28.750606    3513 log.go:172] (0xc000138840) Data frame received for 1\nI0707 12:30:28.750619    3513 log.go:172] (0xc000754640) (1) Data frame handling\nI0707 12:30:28.750625    3513 log.go:172] (0xc000754640) (1) Data frame sent\nI0707 12:30:28.750794    3513 log.go:172] (0xc000138840) (0xc000754640) Stream removed, broadcasting: 1\nI0707 12:30:28.750923    3513 log.go:172] (0xc000138840) (0xc000754640) Stream removed, broadcasting: 1\nI0707 12:30:28.750937    3513 log.go:172] (0xc000138840) (0xc0006b0e60) Stream removed, broadcasting: 3\nI0707 12:30:28.751061    3513 log.go:172] (0xc000138840) (0xc00064a000) Stream removed, broadcasting: 5\nI0707 12:30:28.751285    3513 log.go:172] (0xc000138840) Go away received\n"
Jul  7 12:30:28.755: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  7 12:30:28.755: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul  7 12:30:38.864: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jul  7 12:30:48.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ltzsb ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 12:30:49.089: INFO: stderr: "I0707 12:30:49.009990    3535 log.go:172] (0xc000138580) (0xc000573360) Create stream\nI0707 12:30:49.010055    3535 log.go:172] (0xc000138580) (0xc000573360) Stream added, broadcasting: 1\nI0707 12:30:49.012813    3535 log.go:172] (0xc000138580) Reply frame received for 1\nI0707 12:30:49.012861    3535 log.go:172] (0xc000138580) (0xc0006a4000) Create stream\nI0707 12:30:49.012876    3535 log.go:172] (0xc000138580) (0xc0006a4000) Stream added, broadcasting: 3\nI0707 12:30:49.014233    3535 log.go:172] (0xc000138580) Reply frame received for 3\nI0707 12:30:49.014304    3535 log.go:172] (0xc000138580) (0xc0006a8000) Create stream\nI0707 12:30:49.014331    3535 log.go:172] (0xc000138580) (0xc0006a8000) Stream added, broadcasting: 5\nI0707 12:30:49.015354    3535 log.go:172] (0xc000138580) Reply frame received for 5\nI0707 12:30:49.083335    3535 log.go:172] (0xc000138580) Data frame received for 5\nI0707 12:30:49.083399    3535 log.go:172] (0xc0006a8000) (5) Data frame handling\nI0707 12:30:49.083442    3535 log.go:172] (0xc000138580) Data frame received for 3\nI0707 12:30:49.083462    3535 log.go:172] (0xc0006a4000) (3) Data frame handling\nI0707 12:30:49.083492    3535 log.go:172] (0xc0006a4000) (3) Data frame sent\nI0707 12:30:49.083517    3535 log.go:172] (0xc000138580) Data frame received for 3\nI0707 12:30:49.083535    3535 log.go:172] (0xc0006a4000) (3) Data frame handling\nI0707 12:30:49.085042    3535 log.go:172] (0xc000138580) Data frame received for 1\nI0707 12:30:49.085060    3535 log.go:172] (0xc000573360) (1) Data frame handling\nI0707 12:30:49.085067    3535 log.go:172] (0xc000573360) (1) Data frame sent\nI0707 12:30:49.085075    3535 log.go:172] (0xc000138580) (0xc000573360) Stream removed, broadcasting: 1\nI0707 12:30:49.085334    3535 log.go:172] (0xc000138580) Go away received\nI0707 12:30:49.085420    3535 log.go:172] (0xc000138580) (0xc000573360) Stream removed, broadcasting: 1\nI0707 12:30:49.085483    3535 log.go:172] (0xc000138580) (0xc0006a4000) Stream removed, broadcasting: 3\nI0707 12:30:49.085499    3535 log.go:172] (0xc000138580) (0xc0006a8000) Stream removed, broadcasting: 5\n"
Jul  7 12:30:49.090: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  7 12:30:49.090: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  7 12:30:59.110: INFO: Waiting for StatefulSet e2e-tests-statefulset-ltzsb/ss2 to complete update
Jul  7 12:30:59.110: INFO: Waiting for Pod e2e-tests-statefulset-ltzsb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul  7 12:30:59.110: INFO: Waiting for Pod e2e-tests-statefulset-ltzsb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul  7 12:30:59.110: INFO: Waiting for Pod e2e-tests-statefulset-ltzsb/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul  7 12:31:09.168: INFO: Waiting for StatefulSet e2e-tests-statefulset-ltzsb/ss2 to complete update
Jul  7 12:31:09.168: INFO: Waiting for Pod e2e-tests-statefulset-ltzsb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul  7 12:31:09.168: INFO: Waiting for Pod e2e-tests-statefulset-ltzsb/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul  7 12:31:19.119: INFO: Waiting for StatefulSet e2e-tests-statefulset-ltzsb/ss2 to complete update
Jul  7 12:31:19.119: INFO: Waiting for Pod e2e-tests-statefulset-ltzsb/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Jul  7 12:31:29.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ltzsb ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  7 12:31:29.368: INFO: stderr: "I0707 12:31:29.239207    3558 log.go:172] (0xc000278420) (0xc000728640) Create stream\nI0707 12:31:29.239253    3558 log.go:172] (0xc000278420) (0xc000728640) Stream added, broadcasting: 1\nI0707 12:31:29.241450    3558 log.go:172] (0xc000278420) Reply frame received for 1\nI0707 12:31:29.241528    3558 log.go:172] (0xc000278420) (0xc000588e60) Create stream\nI0707 12:31:29.241557    3558 log.go:172] (0xc000278420) (0xc000588e60) Stream added, broadcasting: 3\nI0707 12:31:29.242750    3558 log.go:172] (0xc000278420) Reply frame received for 3\nI0707 12:31:29.242789    3558 log.go:172] (0xc000278420) (0xc00030e000) Create stream\nI0707 12:31:29.242801    3558 log.go:172] (0xc000278420) (0xc00030e000) Stream added, broadcasting: 5\nI0707 12:31:29.243881    3558 log.go:172] (0xc000278420) Reply frame received for 5\nI0707 12:31:29.362057    3558 log.go:172] (0xc000278420) Data frame received for 3\nI0707 12:31:29.362089    3558 log.go:172] (0xc000588e60) (3) Data frame handling\nI0707 12:31:29.362116    3558 log.go:172] (0xc000588e60) (3) Data frame sent\nI0707 12:31:29.362530    3558 log.go:172] (0xc000278420) Data frame received for 3\nI0707 12:31:29.362552    3558 log.go:172] (0xc000588e60) (3) Data frame handling\nI0707 12:31:29.362684    3558 log.go:172] (0xc000278420) Data frame received for 5\nI0707 12:31:29.362718    3558 log.go:172] (0xc00030e000) (5) Data frame handling\nI0707 12:31:29.364209    3558 log.go:172] (0xc000278420) Data frame received for 1\nI0707 12:31:29.364237    3558 log.go:172] (0xc000728640) (1) Data frame handling\nI0707 12:31:29.364252    3558 log.go:172] (0xc000728640) (1) Data frame sent\nI0707 12:31:29.364271    3558 log.go:172] (0xc000278420) (0xc000728640) Stream removed, broadcasting: 1\nI0707 12:31:29.364293    3558 log.go:172] (0xc000278420) Go away received\nI0707 12:31:29.364472    3558 log.go:172] (0xc000278420) (0xc000728640) Stream removed, broadcasting: 1\nI0707 12:31:29.364500    3558 log.go:172] (0xc000278420) (0xc000588e60) Stream removed, broadcasting: 3\nI0707 12:31:29.364514    3558 log.go:172] (0xc000278420) (0xc00030e000) Stream removed, broadcasting: 5\n"
Jul  7 12:31:29.368: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  7 12:31:29.368: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  7 12:31:39.399: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jul  7 12:31:49.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-ltzsb ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  7 12:31:49.675: INFO: stderr: "I0707 12:31:49.603961    3581 log.go:172] (0xc0001380b0) (0xc000312d20) Create stream\nI0707 12:31:49.604069    3581 log.go:172] (0xc0001380b0) (0xc000312d20) Stream added, broadcasting: 1\nI0707 12:31:49.607320    3581 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0707 12:31:49.607382    3581 log.go:172] (0xc0001380b0) (0xc000312dc0) Create stream\nI0707 12:31:49.607401    3581 log.go:172] (0xc0001380b0) (0xc000312dc0) Stream added, broadcasting: 3\nI0707 12:31:49.608375    3581 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0707 12:31:49.608433    3581 log.go:172] (0xc0001380b0) (0xc00060e000) Create stream\nI0707 12:31:49.608454    3581 log.go:172] (0xc0001380b0) (0xc00060e000) Stream added, broadcasting: 5\nI0707 12:31:49.609569    3581 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0707 12:31:49.668769    3581 log.go:172] (0xc0001380b0) Data frame received for 3\nI0707 12:31:49.668810    3581 log.go:172] (0xc000312dc0) (3) Data frame handling\nI0707 12:31:49.668841    3581 log.go:172] (0xc000312dc0) (3) Data frame sent\nI0707 12:31:49.668866    3581 log.go:172] (0xc0001380b0) Data frame received for 3\nI0707 12:31:49.668878    3581 log.go:172] (0xc000312dc0) (3) Data frame handling\nI0707 12:31:49.668966    3581 log.go:172] (0xc0001380b0) Data frame received for 5\nI0707 12:31:49.668995    3581 log.go:172] (0xc00060e000) (5) Data frame handling\nI0707 12:31:49.670716    3581 log.go:172] (0xc0001380b0) Data frame received for 1\nI0707 12:31:49.670738    3581 log.go:172] (0xc000312d20) (1) Data frame handling\nI0707 12:31:49.670758    3581 log.go:172] (0xc000312d20) (1) Data frame sent\nI0707 12:31:49.670786    3581 log.go:172] (0xc0001380b0) (0xc000312d20) Stream removed, broadcasting: 1\nI0707 12:31:49.670887    3581 log.go:172] (0xc0001380b0) Go away received\nI0707 12:31:49.670996    3581 log.go:172] (0xc0001380b0) (0xc000312d20) Stream removed, broadcasting: 1\nI0707 12:31:49.671025    3581 log.go:172] (0xc0001380b0) (0xc000312dc0) Stream removed, broadcasting: 3\nI0707 12:31:49.671050    3581 log.go:172] (0xc0001380b0) (0xc00060e000) Stream removed, broadcasting: 5\n"
Jul  7 12:31:49.675: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  7 12:31:49.675: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  7 12:31:59.698: INFO: Waiting for StatefulSet e2e-tests-statefulset-ltzsb/ss2 to complete update
Jul  7 12:31:59.698: INFO: Waiting for Pod e2e-tests-statefulset-ltzsb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul  7 12:31:59.698: INFO: Waiting for Pod e2e-tests-statefulset-ltzsb/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul  7 12:31:59.698: INFO: Waiting for Pod e2e-tests-statefulset-ltzsb/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul  7 12:32:09.763: INFO: Waiting for StatefulSet e2e-tests-statefulset-ltzsb/ss2 to complete update
Jul  7 12:32:09.763: INFO: Waiting for Pod e2e-tests-statefulset-ltzsb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul  7 12:32:09.763: INFO: Waiting for Pod e2e-tests-statefulset-ltzsb/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul  7 12:32:19.709: INFO: Waiting for StatefulSet e2e-tests-statefulset-ltzsb/ss2 to complete update
Jul  7 12:32:19.709: INFO: Waiting for Pod e2e-tests-statefulset-ltzsb/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul  7 12:32:29.725: INFO: Deleting all statefulset in ns e2e-tests-statefulset-ltzsb
Jul  7 12:32:29.758: INFO: Scaling statefulset ss2 to 0
Jul  7 12:32:59.796: INFO: Waiting for statefulset status.replicas updated to 0
Jul  7 12:32:59.799: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:32:59.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-ltzsb" for this suite.
Jul  7 12:33:07.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:33:08.017: INFO: namespace: e2e-tests-statefulset-ltzsb, resource: bindings, ignored listing per whitelist
Jul  7 12:33:08.021: INFO: namespace e2e-tests-statefulset-ltzsb deletion completed in 8.136483218s

• [SLOW TEST:179.845 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
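
The StatefulSet test above exercises the RollingUpdate update strategy: it modifies the pod template, waits for the pods to be replaced from the highest ordinal down, then reverts the template and waits for every pod to return to the original revision before tearing the set down. A rough manual equivalent of the two triggering steps, purely illustrative since the suite drives them through the Go e2e framework rather than kubectl (the image value below is an assumption, not taken from this run):

# Trigger a template change (assumed image value), then watch the ordered roll-out converge.
kubectl -n e2e-tests-statefulset-ltzsb patch statefulset ss2 --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/nginx:1.15-alpine"}]'
kubectl -n e2e-tests-statefulset-ltzsb rollout status statefulset/ss2
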
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:33:08.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-d4h6c
Jul  7 12:33:12.203: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-d4h6c
STEP: checking the pod's current state and verifying that restartCount is present
Jul  7 12:33:12.206: INFO: Initial restart count of pod liveness-exec is 0
Jul  7 12:34:02.309: INFO: Restart count of pod e2e-tests-container-probe-d4h6c/liveness-exec is now 1 (50.102381703s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:34:02.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-d4h6c" for this suite.
Jul  7 12:34:08.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:34:08.415: INFO: namespace: e2e-tests-container-probe-d4h6c, resource: bindings, ignored listing per whitelist
Jul  7 12:34:08.460: INFO: namespace e2e-tests-container-probe-d4h6c deletion completed in 6.077563775s

• [SLOW TEST:60.439 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
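
The liveness test above runs a pod whose container creates /tmp/health, deletes it after a few seconds, and carries an exec liveness probe that runs cat /tmp/health; once the file is gone the probe fails and the kubelet restarts the container, which is why the restart count moves from 0 to 1 roughly 50 seconds in. A minimal sketch of such a pod, with assumed field values (the suite builds the pod programmatically, so the image, command timings and probe thresholds below are assumptions, not taken from this run):

# Sketch only - image and probe timings are assumptions.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
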
------------------------------
S
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:34:08.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul  7 12:34:08.638: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-b54sr,SelfLink:/api/v1/namespaces/e2e-tests-watch-b54sr/configmaps/e2e-watch-test-label-changed,UID:2729cbf5-c04e-11ea-a300-0242ac110004,ResourceVersion:610250,Generation:0,CreationTimestamp:2020-07-07 12:34:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  7 12:34:08.638: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-b54sr,SelfLink:/api/v1/namespaces/e2e-tests-watch-b54sr/configmaps/e2e-watch-test-label-changed,UID:2729cbf5-c04e-11ea-a300-0242ac110004,ResourceVersion:610252,Generation:0,CreationTimestamp:2020-07-07 12:34:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul  7 12:34:08.638: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-b54sr,SelfLink:/api/v1/namespaces/e2e-tests-watch-b54sr/configmaps/e2e-watch-test-label-changed,UID:2729cbf5-c04e-11ea-a300-0242ac110004,ResourceVersion:610253,Generation:0,CreationTimestamp:2020-07-07 12:34:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul  7 12:34:18.703: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-b54sr,SelfLink:/api/v1/namespaces/e2e-tests-watch-b54sr/configmaps/e2e-watch-test-label-changed,UID:2729cbf5-c04e-11ea-a300-0242ac110004,ResourceVersion:610291,Generation:0,CreationTimestamp:2020-07-07 12:34:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  7 12:34:18.703: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-b54sr,SelfLink:/api/v1/namespaces/e2e-tests-watch-b54sr/configmaps/e2e-watch-test-label-changed,UID:2729cbf5-c04e-11ea-a300-0242ac110004,ResourceVersion:610295,Generation:0,CreationTimestamp:2020-07-07 12:34:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jul  7 12:34:18.703: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-b54sr,SelfLink:/api/v1/namespaces/e2e-tests-watch-b54sr/configmaps/e2e-watch-test-label-changed,UID:2729cbf5-c04e-11ea-a300-0242ac110004,ResourceVersion:610296,Generation:0,CreationTimestamp:2020-07-07 12:34:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:34:18.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-b54sr" for this suite.
Jul  7 12:34:24.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:34:24.783: INFO: namespace: e2e-tests-watch-b54sr, resource: bindings, ignored listing per whitelist
Jul  7 12:34:24.880: INFO: namespace e2e-tests-watch-b54sr deletion completed in 6.172101686s

• [SLOW TEST:16.420 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
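
The watch test above hinges on label-selector semantics: the watch is opened only for configmaps labelled watch-this-configmap=label-changed-and-restored, so changing that label away from the watched value surfaces on the watch as a DELETED event, and restoring it surfaces as ADDED together with the mutations made in between, exactly as the Got : ADDED/MODIFIED/DELETED lines show. A rough command-line equivalent of that selector-scoped watch (the suite itself uses the client-go watch API, not kubectl):

# Rough equivalent of the selector-scoped watch used by the test above.
kubectl -n e2e-tests-watch-b54sr get configmaps \
  -l watch-this-configmap=label-changed-and-restored --watch -o name
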
------------------------------
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:34:24.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pqlkh A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-pqlkh;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pqlkh A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-pqlkh;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pqlkh.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-pqlkh.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pqlkh.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pqlkh.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-pqlkh.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pqlkh.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-pqlkh.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-pqlkh.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 232.248.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.248.232_udp@PTR;check="$$(dig +tcp +noall +answer +search 232.248.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.248.232_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pqlkh A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-pqlkh;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pqlkh A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pqlkh.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-pqlkh.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pqlkh.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pqlkh.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-pqlkh.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pqlkh.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-pqlkh.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-pqlkh.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 232.248.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.248.232_udp@PTR;check="$$(dig +tcp +noall +answer +search 232.248.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.248.232_tcp@PTR;sleep 1; done

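Each probe container loops over the dig queries above and, whenever a lookup returns a non-empty answer, writes an OK marker to /results/<query-name>. The suite then repeatedly reads those marker files back from the probe pod, which is what the "Unable to read ... from pod" lines that follow correspond to while the service records are not yet resolvable. A hedged manual equivalent of one such read, assuming the results are fetched through the API server's proxy to the pod (the exact path and port handling are assumptions):

# Assumed manual read of one result file via the pod proxy subresource.
kubectl get --raw \
  "/api/v1/namespaces/e2e-tests-dns-pqlkh/pods/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b/proxy/results/wheezy_udp@dns-test-service"
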
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  7 12:34:33.164: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:33.183: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:33.203: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:33.205: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:33.208: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pqlkh from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:33.211: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:33.215: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:33.218: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:33.221: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:33.224: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:33.240: INFO: Lookups using e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pqlkh jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh jessie_udp@dns-test-service.e2e-tests-dns-pqlkh.svc jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc]

Jul  7 12:34:38.245: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:38.263: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:38.287: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:38.289: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:38.291: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pqlkh from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:38.294: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:38.297: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:38.300: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:38.303: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:38.306: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:38.325: INFO: Lookups using e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pqlkh jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh jessie_udp@dns-test-service.e2e-tests-dns-pqlkh.svc jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc]

Jul  7 12:34:43.244: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:43.296: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:43.344: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:43.346: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:43.348: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pqlkh from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:43.350: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:43.352: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:43.355: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:43.357: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:43.359: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:43.373: INFO: Lookups using e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pqlkh jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh jessie_udp@dns-test-service.e2e-tests-dns-pqlkh.svc jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc]

Jul  7 12:34:48.245: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:48.262: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:48.298: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:48.301: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:48.304: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pqlkh from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:48.307: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:48.310: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:48.312: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:48.315: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:48.319: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:48.337: INFO: Lookups using e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pqlkh jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh jessie_udp@dns-test-service.e2e-tests-dns-pqlkh.svc jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc]

Jul  7 12:34:53.244: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:53.258: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:53.289: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:53.291: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:53.293: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pqlkh from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:53.296: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:53.298: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:53.301: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:53.303: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:53.305: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:53.346: INFO: Lookups using e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pqlkh jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh jessie_udp@dns-test-service.e2e-tests-dns-pqlkh.svc jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc]

Jul  7 12:34:58.245: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:58.266: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:58.293: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:58.296: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:58.299: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pqlkh from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:58.303: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:58.306: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:58.309: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:58.313: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:58.316: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc from pod e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b: the server could not find the requested resource (get pods dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b)
Jul  7 12:34:58.335: INFO: Lookups using e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b failed for: [wheezy_udp@dns-test-service wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pqlkh jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh jessie_udp@dns-test-service.e2e-tests-dns-pqlkh.svc jessie_tcp@dns-test-service.e2e-tests-dns-pqlkh.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pqlkh.svc]

Jul  7 12:35:03.350: INFO: DNS probes using e2e-tests-dns-pqlkh/dns-test-31058ae0-c04e-11ea-9ad7-0242ac11001b succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:35:04.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-pqlkh" for this suite.
Jul  7 12:35:10.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:35:10.646: INFO: namespace: e2e-tests-dns-pqlkh, resource: bindings, ignored listing per whitelist
Jul  7 12:35:10.654: INFO: namespace e2e-tests-dns-pqlkh deletion completed in 6.071858574s

• [SLOW TEST:45.774 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
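
Once the cluster DNS service has programmed records for the headless service, the looped dig queries start answering and the probes write their OK markers, which is what the "DNS probes ... succeeded" line at 12:35:03 reports. A one-off equivalent of a single looped query, run from any pod that has dig installed (the pod name below is a placeholder, not something from this run):

# Placeholder pod name; any pod with dig available would do.
kubectl -n e2e-tests-dns-pqlkh exec some-pod-with-dig -- \
  dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pqlkh.svc A
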
------------------------------
SS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:35:10.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 12:35:10.741: INFO: Creating deployment "test-recreate-deployment"
Jul  7 12:35:10.754: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jul  7 12:35:10.761: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jul  7 12:35:12.769: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jul  7 12:35:12.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722110, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722110, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722110, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722110, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 12:35:14.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722110, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722110, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722110, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722110, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 12:35:16.778: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jul  7 12:35:16.783: INFO: Updating deployment test-recreate-deployment
Jul  7 12:35:16.784: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul  7 12:35:17.654: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-252j2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-252j2/deployments/test-recreate-deployment,UID:4c3f56a6-c04e-11ea-a300-0242ac110004,ResourceVersion:610512,Generation:2,CreationTimestamp:2020-07-07 12:35:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-07-07 12:35:16 +0000 UTC 2020-07-07 12:35:16 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-07 12:35:17 +0000 UTC 2020-07-07 12:35:10 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jul  7 12:35:17.704: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-252j2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-252j2/replicasets/test-recreate-deployment-589c4bfd,UID:4fec3a0f-c04e-11ea-a300-0242ac110004,ResourceVersion:610509,Generation:1,CreationTimestamp:2020-07-07 12:35:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4c3f56a6-c04e-11ea-a300-0242ac110004 0xc000e7ba9f 0xc000e7bab0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  7 12:35:17.704: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jul  7 12:35:17.704: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-252j2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-252j2/replicasets/test-recreate-deployment-5bf7f65dc,UID:4c4257cd-c04e-11ea-a300-0242ac110004,ResourceVersion:610501,Generation:2,CreationTimestamp:2020-07-07 12:35:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4c3f56a6-c04e-11ea-a300-0242ac110004 0xc000e7bb80 0xc000e7bb81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  7 12:35:17.706: INFO: Pod "test-recreate-deployment-589c4bfd-7lkqw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-7lkqw,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-252j2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-252j2/pods/test-recreate-deployment-589c4bfd-7lkqw,UID:4fedfccd-c04e-11ea-a300-0242ac110004,ResourceVersion:610513,Generation:0,CreationTimestamp:2020-07-07 12:35:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 4fec3a0f-c04e-11ea-a300-0242ac110004 0xc00193d91f 0xc00193d9a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-dgxpc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dgxpc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-dgxpc true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00193daf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00193db10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:35:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:35:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:35:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:35:16 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:,StartTime:2020-07-07 12:35:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:35:17.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-252j2" for this suite.
Jul  7 12:35:23.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:35:23.795: INFO: namespace: e2e-tests-deployment-252j2, resource: bindings, ignored listing per whitelist
Jul  7 12:35:23.955: INFO: namespace e2e-tests-deployment-252j2 deletion completed in 6.246388246s

• [SLOW TEST:13.301 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
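
The Recreate test above relies on .spec.strategy.type: Recreate, visible in the Deployment dump (Strategy{Type:Recreate,RollingUpdate:nil}): when the pod template is swapped from the redis container to the nginx one, the old ReplicaSet is scaled to zero before the new ReplicaSet's pod is started, which is why the old ReplicaSet already shows Replicas:*0 while the replacement pod is still Pending. A comparable manual trigger, hedged because the suite also renames the container (which kubectl set image cannot do) and applies the change through the API directly:

# Image-only approximation of the template swap the test performs.
kubectl -n e2e-tests-deployment-252j2 set image deployment/test-recreate-deployment \
  redis=docker.io/library/nginx:1.14-alpine
# Confirm the old pods are gone before the new ones come up.
kubectl -n e2e-tests-deployment-252j2 rollout status deployment/test-recreate-deployment
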
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:35:23.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-8xp9c
I0707 12:35:24.161376       6 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-8xp9c, replica count: 1
I0707 12:35:25.211769       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 12:35:26.212341       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 12:35:27.212527       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 12:35:28.212755       6 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
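
From this point on, each "Created: latency-svc-..." / "Got endpoints: ..." pair measures roughly how long it takes from creating a Service that selects the svc-latency-rc pod until a ready address shows up in that Service's Endpoints object; the bracketed duration is that latency, and the spec later checks that the distribution is not excessively high. A spot-check of one such Endpoints object while the namespace still exists (the service name is taken from the log; the flags are standard kubectl):

# Shows the pod IP(s) backing one of the generated test services.
kubectl -n e2e-tests-svc-latency-8xp9c get endpoints latency-svc-66sz9 \
  -o jsonpath='{.subsets[*].addresses[*].ip}'
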
Jul  7 12:35:28.367: INFO: Created: latency-svc-66sz9
Jul  7 12:35:28.392: INFO: Got endpoints: latency-svc-66sz9 [79.747922ms]
Jul  7 12:35:28.435: INFO: Created: latency-svc-4nfm4
Jul  7 12:35:28.526: INFO: Got endpoints: latency-svc-4nfm4 [133.783675ms]
Jul  7 12:35:28.556: INFO: Created: latency-svc-kdjqn
Jul  7 12:35:28.584: INFO: Got endpoints: latency-svc-kdjqn [191.599515ms]
Jul  7 12:35:28.624: INFO: Created: latency-svc-h5t8f
Jul  7 12:35:28.658: INFO: Got endpoints: latency-svc-h5t8f [265.334435ms]
Jul  7 12:35:28.686: INFO: Created: latency-svc-6s8w9
Jul  7 12:35:28.695: INFO: Got endpoints: latency-svc-6s8w9 [302.478932ms]
Jul  7 12:35:28.722: INFO: Created: latency-svc-dx987
Jul  7 12:35:28.738: INFO: Got endpoints: latency-svc-dx987 [345.673568ms]
Jul  7 12:35:28.801: INFO: Created: latency-svc-n6ncv
Jul  7 12:35:28.804: INFO: Got endpoints: latency-svc-n6ncv [411.833459ms]
Jul  7 12:35:28.867: INFO: Created: latency-svc-k6r9x
Jul  7 12:35:28.879: INFO: Got endpoints: latency-svc-k6r9x [486.560938ms]
Jul  7 12:35:28.951: INFO: Created: latency-svc-gxwgt
Jul  7 12:35:28.954: INFO: Got endpoints: latency-svc-gxwgt [560.939057ms]
Jul  7 12:35:28.990: INFO: Created: latency-svc-9ndc7
Jul  7 12:35:29.002: INFO: Got endpoints: latency-svc-9ndc7 [609.850355ms]
Jul  7 12:35:29.044: INFO: Created: latency-svc-qqmbk
Jul  7 12:35:29.098: INFO: Got endpoints: latency-svc-qqmbk [705.527316ms]
Jul  7 12:35:29.125: INFO: Created: latency-svc-g9f2m
Jul  7 12:35:29.178: INFO: Got endpoints: latency-svc-g9f2m [784.951717ms]
Jul  7 12:35:29.238: INFO: Created: latency-svc-kmw6m
Jul  7 12:35:29.242: INFO: Got endpoints: latency-svc-kmw6m [849.106616ms]
Jul  7 12:35:29.269: INFO: Created: latency-svc-bcbzv
Jul  7 12:35:29.288: INFO: Got endpoints: latency-svc-bcbzv [894.810356ms]
Jul  7 12:35:29.317: INFO: Created: latency-svc-t9v8h
Jul  7 12:35:29.329: INFO: Got endpoints: latency-svc-t9v8h [936.933604ms]
Jul  7 12:35:29.400: INFO: Created: latency-svc-mbcln
Jul  7 12:35:29.403: INFO: Got endpoints: latency-svc-mbcln [1.010154431s]
Jul  7 12:35:29.446: INFO: Created: latency-svc-r2jhg
Jul  7 12:35:29.468: INFO: Got endpoints: latency-svc-r2jhg [941.919431ms]
Jul  7 12:35:29.497: INFO: Created: latency-svc-4dbqv
Jul  7 12:35:29.592: INFO: Got endpoints: latency-svc-4dbqv [1.007544789s]
Jul  7 12:35:29.594: INFO: Created: latency-svc-fzllt
Jul  7 12:35:29.616: INFO: Got endpoints: latency-svc-fzllt [958.397115ms]
Jul  7 12:35:29.656: INFO: Created: latency-svc-85mwp
Jul  7 12:35:29.669: INFO: Got endpoints: latency-svc-85mwp [974.28976ms]
Jul  7 12:35:29.692: INFO: Created: latency-svc-wdphb
Jul  7 12:35:29.758: INFO: Got endpoints: latency-svc-wdphb [1.019303366s]
Jul  7 12:35:29.758: INFO: Created: latency-svc-ccpnk
Jul  7 12:35:29.814: INFO: Got endpoints: latency-svc-ccpnk [1.009979376s]
Jul  7 12:35:29.887: INFO: Created: latency-svc-hqn8h
Jul  7 12:35:29.895: INFO: Got endpoints: latency-svc-hqn8h [1.015647413s]
Jul  7 12:35:29.920: INFO: Created: latency-svc-r6l8v
Jul  7 12:35:29.937: INFO: Got endpoints: latency-svc-r6l8v [983.381434ms]
Jul  7 12:35:29.956: INFO: Created: latency-svc-92696
Jul  7 12:35:29.964: INFO: Got endpoints: latency-svc-92696 [961.909799ms]
Jul  7 12:35:30.030: INFO: Created: latency-svc-c7968
Jul  7 12:35:30.043: INFO: Got endpoints: latency-svc-c7968 [944.151488ms]
Jul  7 12:35:30.092: INFO: Created: latency-svc-khqkh
Jul  7 12:35:30.124: INFO: Got endpoints: latency-svc-khqkh [946.093505ms]
Jul  7 12:35:30.191: INFO: Created: latency-svc-zldjz
Jul  7 12:35:30.235: INFO: Got endpoints: latency-svc-zldjz [992.663643ms]
Jul  7 12:35:30.265: INFO: Created: latency-svc-n2plr
Jul  7 12:35:30.394: INFO: Got endpoints: latency-svc-n2plr [1.106494352s]
Jul  7 12:35:30.445: INFO: Created: latency-svc-wcrl5
Jul  7 12:35:30.484: INFO: Got endpoints: latency-svc-wcrl5 [1.154269618s]
Jul  7 12:35:30.635: INFO: Created: latency-svc-q8ml2
Jul  7 12:35:30.646: INFO: Got endpoints: latency-svc-q8ml2 [1.242472667s]
Jul  7 12:35:30.685: INFO: Created: latency-svc-b9mmf
Jul  7 12:35:30.710: INFO: Got endpoints: latency-svc-b9mmf [1.241446728s]
Jul  7 12:35:30.783: INFO: Created: latency-svc-xjv7p
Jul  7 12:35:30.787: INFO: Got endpoints: latency-svc-xjv7p [1.195244771s]
Jul  7 12:35:30.847: INFO: Created: latency-svc-4x25s
Jul  7 12:35:30.866: INFO: Got endpoints: latency-svc-4x25s [1.249170945s]
Jul  7 12:35:30.946: INFO: Created: latency-svc-wwxnh
Jul  7 12:35:30.998: INFO: Got endpoints: latency-svc-wwxnh [1.328321622s]
Jul  7 12:35:30.998: INFO: Created: latency-svc-7nzxn
Jul  7 12:35:31.107: INFO: Got endpoints: latency-svc-7nzxn [1.349113477s]
Jul  7 12:35:31.109: INFO: Created: latency-svc-sd5fd
Jul  7 12:35:31.120: INFO: Got endpoints: latency-svc-sd5fd [1.305635184s]
Jul  7 12:35:31.163: INFO: Created: latency-svc-w47gc
Jul  7 12:35:31.168: INFO: Got endpoints: latency-svc-w47gc [1.273542495s]
Jul  7 12:35:31.202: INFO: Created: latency-svc-t6tsb
Jul  7 12:35:31.292: INFO: Got endpoints: latency-svc-t6tsb [1.354824172s]
Jul  7 12:35:31.294: INFO: Created: latency-svc-tcggx
Jul  7 12:35:31.301: INFO: Got endpoints: latency-svc-tcggx [1.336513892s]
Jul  7 12:35:31.349: INFO: Created: latency-svc-m8zpm
Jul  7 12:35:31.367: INFO: Got endpoints: latency-svc-m8zpm [1.324280915s]
Jul  7 12:35:31.448: INFO: Created: latency-svc-5fz9c
Jul  7 12:35:31.451: INFO: Got endpoints: latency-svc-5fz9c [1.327579605s]
Jul  7 12:35:31.523: INFO: Created: latency-svc-nkgrr
Jul  7 12:35:31.639: INFO: Got endpoints: latency-svc-nkgrr [1.404624818s]
Jul  7 12:35:31.642: INFO: Created: latency-svc-h2wdw
Jul  7 12:35:31.656: INFO: Got endpoints: latency-svc-h2wdw [1.26154618s]
Jul  7 12:35:31.676: INFO: Created: latency-svc-fsrlq
Jul  7 12:35:31.698: INFO: Got endpoints: latency-svc-fsrlq [1.213796699s]
Jul  7 12:35:31.717: INFO: Created: latency-svc-xcwdf
Jul  7 12:35:31.728: INFO: Got endpoints: latency-svc-xcwdf [1.082100866s]
Jul  7 12:35:31.825: INFO: Created: latency-svc-jztfc
Jul  7 12:35:31.870: INFO: Got endpoints: latency-svc-jztfc [1.160403959s]
Jul  7 12:35:31.913: INFO: Created: latency-svc-qjfx4
Jul  7 12:35:31.993: INFO: Got endpoints: latency-svc-qjfx4 [1.205937923s]
Jul  7 12:35:31.996: INFO: Created: latency-svc-bfh87
Jul  7 12:35:32.016: INFO: Got endpoints: latency-svc-bfh87 [1.150816452s]
Jul  7 12:35:32.042: INFO: Created: latency-svc-htsns
Jul  7 12:35:32.052: INFO: Got endpoints: latency-svc-htsns [1.054545016s]
Jul  7 12:35:32.080: INFO: Created: latency-svc-mgxhh
Jul  7 12:35:32.148: INFO: Got endpoints: latency-svc-mgxhh [1.041509396s]
Jul  7 12:35:32.152: INFO: Created: latency-svc-wqzzx
Jul  7 12:35:32.182: INFO: Got endpoints: latency-svc-wqzzx [1.061997567s]
Jul  7 12:35:32.245: INFO: Created: latency-svc-jxdf9
Jul  7 12:35:32.340: INFO: Got endpoints: latency-svc-jxdf9 [1.171671195s]
Jul  7 12:35:32.344: INFO: Created: latency-svc-mk9fv
Jul  7 12:35:32.353: INFO: Got endpoints: latency-svc-mk9fv [1.060748597s]
Jul  7 12:35:32.405: INFO: Created: latency-svc-zsqpm
Jul  7 12:35:32.420: INFO: Got endpoints: latency-svc-zsqpm [1.118956535s]
Jul  7 12:35:32.496: INFO: Created: latency-svc-wwqk5
Jul  7 12:35:32.539: INFO: Got endpoints: latency-svc-wwqk5 [1.1715795s]
Jul  7 12:35:32.539: INFO: Created: latency-svc-qzqh7
Jul  7 12:35:32.569: INFO: Got endpoints: latency-svc-qzqh7 [1.117652254s]
Jul  7 12:35:32.646: INFO: Created: latency-svc-5cqq8
Jul  7 12:35:32.650: INFO: Got endpoints: latency-svc-5cqq8 [1.010435584s]
Jul  7 12:35:32.693: INFO: Created: latency-svc-mx5g9
Jul  7 12:35:32.709: INFO: Got endpoints: latency-svc-mx5g9 [1.052815047s]
Jul  7 12:35:32.728: INFO: Created: latency-svc-29l7z
Jul  7 12:35:32.799: INFO: Got endpoints: latency-svc-29l7z [1.100857264s]
Jul  7 12:35:32.812: INFO: Created: latency-svc-9vnfx
Jul  7 12:35:32.829: INFO: Got endpoints: latency-svc-9vnfx [1.101355759s]
Jul  7 12:35:32.881: INFO: Created: latency-svc-6x9kt
Jul  7 12:35:32.889: INFO: Got endpoints: latency-svc-6x9kt [1.019182677s]
Jul  7 12:35:32.958: INFO: Created: latency-svc-hn24k
Jul  7 12:35:32.967: INFO: Got endpoints: latency-svc-hn24k [974.26416ms]
Jul  7 12:35:32.998: INFO: Created: latency-svc-vv7z4
Jul  7 12:35:33.016: INFO: Got endpoints: latency-svc-vv7z4 [999.136313ms]
Jul  7 12:35:33.040: INFO: Created: latency-svc-xl7fs
Jul  7 12:35:33.125: INFO: Got endpoints: latency-svc-xl7fs [1.072434598s]
Jul  7 12:35:33.148: INFO: Created: latency-svc-7k7f2
Jul  7 12:35:33.191: INFO: Got endpoints: latency-svc-7k7f2 [1.042036853s]
Jul  7 12:35:33.305: INFO: Created: latency-svc-x7dwm
Jul  7 12:35:33.308: INFO: Got endpoints: latency-svc-x7dwm [1.12605816s]
Jul  7 12:35:33.358: INFO: Created: latency-svc-8kkv6
Jul  7 12:35:33.377: INFO: Got endpoints: latency-svc-8kkv6 [1.036416817s]
Jul  7 12:35:33.400: INFO: Created: latency-svc-xjhkl
Jul  7 12:35:33.502: INFO: Got endpoints: latency-svc-xjhkl [1.149012425s]
Jul  7 12:35:33.504: INFO: Created: latency-svc-jlmcr
Jul  7 12:35:33.514: INFO: Got endpoints: latency-svc-jlmcr [1.09415247s]
Jul  7 12:35:33.534: INFO: Created: latency-svc-pr9x8
Jul  7 12:35:33.552: INFO: Got endpoints: latency-svc-pr9x8 [1.012965524s]
Jul  7 12:35:33.670: INFO: Created: latency-svc-qz2cs
Jul  7 12:35:33.674: INFO: Got endpoints: latency-svc-qz2cs [1.104667496s]
Jul  7 12:35:33.744: INFO: Created: latency-svc-vsclz
Jul  7 12:35:33.761: INFO: Got endpoints: latency-svc-vsclz [1.111101437s]
Jul  7 12:35:33.831: INFO: Created: latency-svc-c9v4c
Jul  7 12:35:33.839: INFO: Got endpoints: latency-svc-c9v4c [1.130620122s]
Jul  7 12:35:33.870: INFO: Created: latency-svc-c4xc5
Jul  7 12:35:33.903: INFO: Got endpoints: latency-svc-c4xc5 [1.103972594s]
Jul  7 12:35:34.005: INFO: Created: latency-svc-jcxq2
Jul  7 12:35:34.008: INFO: Got endpoints: latency-svc-jcxq2 [1.178838188s]
Jul  7 12:35:34.056: INFO: Created: latency-svc-hrlpm
Jul  7 12:35:34.104: INFO: Got endpoints: latency-svc-hrlpm [1.214620936s]
Jul  7 12:35:34.161: INFO: Created: latency-svc-b4296
Jul  7 12:35:34.170: INFO: Got endpoints: latency-svc-b4296 [1.202973502s]
Jul  7 12:35:34.210: INFO: Created: latency-svc-hxpqd
Jul  7 12:35:34.224: INFO: Got endpoints: latency-svc-hxpqd [1.208615184s]
Jul  7 12:35:34.251: INFO: Created: latency-svc-dnwch
Jul  7 12:35:34.352: INFO: Got endpoints: latency-svc-dnwch [1.227225511s]
Jul  7 12:35:34.354: INFO: Created: latency-svc-8fk25
Jul  7 12:35:34.390: INFO: Got endpoints: latency-svc-8fk25 [1.199289694s]
Jul  7 12:35:34.429: INFO: Created: latency-svc-xxcjv
Jul  7 12:35:34.447: INFO: Got endpoints: latency-svc-xxcjv [1.13863197s]
Jul  7 12:35:34.502: INFO: Created: latency-svc-tzvmv
Jul  7 12:35:34.513: INFO: Got endpoints: latency-svc-tzvmv [1.136426696s]
Jul  7 12:35:34.534: INFO: Created: latency-svc-wwj8h
Jul  7 12:35:34.572: INFO: Got endpoints: latency-svc-wwj8h [1.070234517s]
Jul  7 12:35:34.664: INFO: Created: latency-svc-s6zwk
Jul  7 12:35:34.667: INFO: Got endpoints: latency-svc-s6zwk [1.153155319s]
Jul  7 12:35:34.696: INFO: Created: latency-svc-h9kp7
Jul  7 12:35:34.718: INFO: Got endpoints: latency-svc-h9kp7 [1.166050079s]
Jul  7 12:35:34.755: INFO: Created: latency-svc-s5ct9
Jul  7 12:35:34.825: INFO: Got endpoints: latency-svc-s5ct9 [1.151306996s]
Jul  7 12:35:34.860: INFO: Created: latency-svc-cv57n
Jul  7 12:35:34.874: INFO: Got endpoints: latency-svc-cv57n [1.112850838s]
Jul  7 12:35:34.897: INFO: Created: latency-svc-lcvvg
Jul  7 12:35:34.916: INFO: Got endpoints: latency-svc-lcvvg [1.076695889s]
Jul  7 12:35:34.988: INFO: Created: latency-svc-jhqtp
Jul  7 12:35:34.991: INFO: Got endpoints: latency-svc-jhqtp [1.088728088s]
Jul  7 12:35:35.026: INFO: Created: latency-svc-zgkgs
Jul  7 12:35:35.042: INFO: Got endpoints: latency-svc-zgkgs [1.034291436s]
Jul  7 12:35:35.062: INFO: Created: latency-svc-xkv2l
Jul  7 12:35:35.142: INFO: Got endpoints: latency-svc-xkv2l [1.038072908s]
Jul  7 12:35:35.161: INFO: Created: latency-svc-9vcmx
Jul  7 12:35:35.203: INFO: Got endpoints: latency-svc-9vcmx [1.032143075s]
Jul  7 12:35:35.236: INFO: Created: latency-svc-2vg52
Jul  7 12:35:35.322: INFO: Got endpoints: latency-svc-2vg52 [1.09765638s]
Jul  7 12:35:35.325: INFO: Created: latency-svc-tl99q
Jul  7 12:35:35.352: INFO: Got endpoints: latency-svc-tl99q [999.9452ms]
Jul  7 12:35:35.410: INFO: Created: latency-svc-nl86p
Jul  7 12:35:35.520: INFO: Got endpoints: latency-svc-nl86p [1.130049313s]
Jul  7 12:35:35.545: INFO: Created: latency-svc-ghs5l
Jul  7 12:35:35.581: INFO: Got endpoints: latency-svc-ghs5l [1.133750942s]
Jul  7 12:35:35.665: INFO: Created: latency-svc-xhv9w
Jul  7 12:35:35.680: INFO: Got endpoints: latency-svc-xhv9w [1.166458422s]
Jul  7 12:35:35.716: INFO: Created: latency-svc-88l6q
Jul  7 12:35:35.725: INFO: Got endpoints: latency-svc-88l6q [1.15279378s]
Jul  7 12:35:35.761: INFO: Created: latency-svc-zg5fz
Jul  7 12:35:35.855: INFO: Got endpoints: latency-svc-zg5fz [1.18769215s]
Jul  7 12:35:35.857: INFO: Created: latency-svc-rp6kn
Jul  7 12:35:35.892: INFO: Got endpoints: latency-svc-rp6kn [1.174602496s]
Jul  7 12:35:35.938: INFO: Created: latency-svc-fwtrf
Jul  7 12:35:35.954: INFO: Got endpoints: latency-svc-fwtrf [1.128947813s]
Jul  7 12:35:36.025: INFO: Created: latency-svc-47v5b
Jul  7 12:35:36.038: INFO: Got endpoints: latency-svc-47v5b [1.164231363s]
Jul  7 12:35:36.063: INFO: Created: latency-svc-rftv5
Jul  7 12:35:36.080: INFO: Got endpoints: latency-svc-rftv5 [1.164048318s]
Jul  7 12:35:36.114: INFO: Created: latency-svc-wmnzc
Jul  7 12:35:36.190: INFO: Got endpoints: latency-svc-wmnzc [1.198911982s]
Jul  7 12:35:36.193: INFO: Created: latency-svc-wch5w
Jul  7 12:35:36.201: INFO: Got endpoints: latency-svc-wch5w [1.158365077s]
Jul  7 12:35:36.244: INFO: Created: latency-svc-k99br
Jul  7 12:35:36.261: INFO: Got endpoints: latency-svc-k99br [1.11870993s]
Jul  7 12:35:36.285: INFO: Created: latency-svc-g8ptp
Jul  7 12:35:36.359: INFO: Got endpoints: latency-svc-g8ptp [1.156049938s]
Jul  7 12:35:36.421: INFO: Created: latency-svc-fxzzd
Jul  7 12:35:36.442: INFO: Got endpoints: latency-svc-fxzzd [1.119453974s]
Jul  7 12:35:36.514: INFO: Created: latency-svc-xbhv8
Jul  7 12:35:36.526: INFO: Got endpoints: latency-svc-xbhv8 [1.173457532s]
Jul  7 12:35:36.566: INFO: Created: latency-svc-q5nvv
Jul  7 12:35:36.612: INFO: Got endpoints: latency-svc-q5nvv [1.091825044s]
Jul  7 12:35:36.660: INFO: Created: latency-svc-mjnjv
Jul  7 12:35:36.670: INFO: Got endpoints: latency-svc-mjnjv [1.089362267s]
Jul  7 12:35:36.694: INFO: Created: latency-svc-tkqj2
Jul  7 12:35:36.712: INFO: Got endpoints: latency-svc-tkqj2 [1.032726022s]
Jul  7 12:35:36.735: INFO: Created: latency-svc-62r54
Jul  7 12:35:36.748: INFO: Got endpoints: latency-svc-62r54 [1.023161976s]
Jul  7 12:35:36.879: INFO: Created: latency-svc-vmxc9
Jul  7 12:35:36.910: INFO: Got endpoints: latency-svc-vmxc9 [1.055314154s]
Jul  7 12:35:36.944: INFO: Created: latency-svc-2nfj6
Jul  7 12:35:36.993: INFO: Got endpoints: latency-svc-2nfj6 [1.100346141s]
Jul  7 12:35:37.005: INFO: Created: latency-svc-ms2gr
Jul  7 12:35:37.025: INFO: Got endpoints: latency-svc-ms2gr [1.071202242s]
Jul  7 12:35:37.060: INFO: Created: latency-svc-szxpb
Jul  7 12:35:37.085: INFO: Got endpoints: latency-svc-szxpb [1.04661736s]
Jul  7 12:35:37.162: INFO: Created: latency-svc-s2qmm
Jul  7 12:35:37.169: INFO: Got endpoints: latency-svc-s2qmm [1.088597759s]
Jul  7 12:35:37.259: INFO: Created: latency-svc-8kpgh
Jul  7 12:35:37.334: INFO: Got endpoints: latency-svc-8kpgh [1.144045438s]
Jul  7 12:35:37.337: INFO: Created: latency-svc-67r6g
Jul  7 12:35:37.350: INFO: Got endpoints: latency-svc-67r6g [1.149117424s]
Jul  7 12:35:37.387: INFO: Created: latency-svc-qjjlm
Jul  7 12:35:37.404: INFO: Got endpoints: latency-svc-qjjlm [1.142967767s]
Jul  7 12:35:37.520: INFO: Created: latency-svc-z2v2n
Jul  7 12:35:37.522: INFO: Got endpoints: latency-svc-z2v2n [1.16360929s]
Jul  7 12:35:37.603: INFO: Created: latency-svc-k6g8d
Jul  7 12:35:37.682: INFO: Got endpoints: latency-svc-k6g8d [1.239912717s]
Jul  7 12:35:37.683: INFO: Created: latency-svc-z9m4j
Jul  7 12:35:37.692: INFO: Got endpoints: latency-svc-z9m4j [1.166125365s]
Jul  7 12:35:37.723: INFO: Created: latency-svc-mgtlh
Jul  7 12:35:37.742: INFO: Got endpoints: latency-svc-mgtlh [1.129664649s]
Jul  7 12:35:37.771: INFO: Created: latency-svc-dc5lw
Jul  7 12:35:37.849: INFO: Got endpoints: latency-svc-dc5lw [1.179041313s]
Jul  7 12:35:37.851: INFO: Created: latency-svc-wg5pl
Jul  7 12:35:37.861: INFO: Got endpoints: latency-svc-wg5pl [1.148886622s]
Jul  7 12:35:37.897: INFO: Created: latency-svc-66crd
Jul  7 12:35:37.916: INFO: Got endpoints: latency-svc-66crd [1.16717802s]
Jul  7 12:35:37.948: INFO: Created: latency-svc-52srb
Jul  7 12:35:38.029: INFO: Got endpoints: latency-svc-52srb [1.118421997s]
Jul  7 12:35:38.034: INFO: Created: latency-svc-n48jv
Jul  7 12:35:38.041: INFO: Got endpoints: latency-svc-n48jv [1.048584606s]
Jul  7 12:35:38.075: INFO: Created: latency-svc-gkshm
Jul  7 12:35:38.102: INFO: Got endpoints: latency-svc-gkshm [1.076784068s]
Jul  7 12:35:38.203: INFO: Created: latency-svc-5skkf
Jul  7 12:35:38.206: INFO: Got endpoints: latency-svc-5skkf [1.120845449s]
Jul  7 12:35:38.239: INFO: Created: latency-svc-phq9n
Jul  7 12:35:38.289: INFO: Got endpoints: latency-svc-phq9n [1.119888037s]
Jul  7 12:35:38.370: INFO: Created: latency-svc-rfk2d
Jul  7 12:35:38.371: INFO: Got endpoints: latency-svc-rfk2d [1.036063213s]
Jul  7 12:35:38.449: INFO: Created: latency-svc-gzdg9
Jul  7 12:35:38.540: INFO: Created: latency-svc-f96bn
Jul  7 12:35:38.596: INFO: Got endpoints: latency-svc-gzdg9 [307.033397ms]
Jul  7 12:35:38.596: INFO: Created: latency-svc-vjvjh
Jul  7 12:35:38.615: INFO: Got endpoints: latency-svc-vjvjh [1.210788223s]
Jul  7 12:35:38.688: INFO: Got endpoints: latency-svc-f96bn [1.337709323s]
Jul  7 12:35:38.688: INFO: Created: latency-svc-tzjkp
Jul  7 12:35:38.692: INFO: Got endpoints: latency-svc-tzjkp [1.169398481s]
Jul  7 12:35:38.725: INFO: Created: latency-svc-q8h74
Jul  7 12:35:38.751: INFO: Got endpoints: latency-svc-q8h74 [1.069925033s]
Jul  7 12:35:38.843: INFO: Created: latency-svc-n6vpb
Jul  7 12:35:38.846: INFO: Got endpoints: latency-svc-n6vpb [1.154086419s]
Jul  7 12:35:38.893: INFO: Created: latency-svc-8855b
Jul  7 12:35:38.916: INFO: Got endpoints: latency-svc-8855b [1.174074381s]
Jul  7 12:35:38.941: INFO: Created: latency-svc-xbwpg
Jul  7 12:35:38.999: INFO: Got endpoints: latency-svc-xbwpg [1.149380532s]
Jul  7 12:35:39.019: INFO: Created: latency-svc-nbsp5
Jul  7 12:35:39.036: INFO: Got endpoints: latency-svc-nbsp5 [1.174291346s]
Jul  7 12:35:39.074: INFO: Created: latency-svc-ngrjh
Jul  7 12:35:39.086: INFO: Got endpoints: latency-svc-ngrjh [1.170615418s]
Jul  7 12:35:39.182: INFO: Created: latency-svc-896rr
Jul  7 12:35:39.192: INFO: Got endpoints: latency-svc-896rr [1.163228351s]
Jul  7 12:35:39.224: INFO: Created: latency-svc-8g2c6
Jul  7 12:35:39.235: INFO: Got endpoints: latency-svc-8g2c6 [1.193465984s]
Jul  7 12:35:39.259: INFO: Created: latency-svc-ln7lw
Jul  7 12:35:39.346: INFO: Got endpoints: latency-svc-ln7lw [1.244193291s]
Jul  7 12:35:39.370: INFO: Created: latency-svc-w749p
Jul  7 12:35:39.385: INFO: Got endpoints: latency-svc-w749p [1.178963738s]
Jul  7 12:35:39.419: INFO: Created: latency-svc-rc5pz
Jul  7 12:35:39.433: INFO: Got endpoints: latency-svc-rc5pz [1.0625812s]
Jul  7 12:35:39.508: INFO: Created: latency-svc-25pds
Jul  7 12:35:39.538: INFO: Got endpoints: latency-svc-25pds [941.881747ms]
Jul  7 12:35:39.568: INFO: Created: latency-svc-6rmbw
Jul  7 12:35:39.584: INFO: Got endpoints: latency-svc-6rmbw [968.655036ms]
Jul  7 12:35:39.607: INFO: Created: latency-svc-2rp92
Jul  7 12:35:39.669: INFO: Got endpoints: latency-svc-2rp92 [981.547348ms]
Jul  7 12:35:39.671: INFO: Created: latency-svc-ht7ls
Jul  7 12:35:39.697: INFO: Got endpoints: latency-svc-ht7ls [1.005295702s]
Jul  7 12:35:39.736: INFO: Created: latency-svc-xsrxl
Jul  7 12:35:39.752: INFO: Got endpoints: latency-svc-xsrxl [1.000523897s]
Jul  7 12:35:39.812: INFO: Created: latency-svc-7spw7
Jul  7 12:35:39.836: INFO: Got endpoints: latency-svc-7spw7 [990.364265ms]
Jul  7 12:35:39.871: INFO: Created: latency-svc-lm2vr
Jul  7 12:35:39.903: INFO: Got endpoints: latency-svc-lm2vr [987.371082ms]
Jul  7 12:35:39.951: INFO: Created: latency-svc-5hqrr
Jul  7 12:35:39.963: INFO: Got endpoints: latency-svc-5hqrr [963.940013ms]
Jul  7 12:35:39.992: INFO: Created: latency-svc-flpmb
Jul  7 12:35:39.995: INFO: Got endpoints: latency-svc-flpmb [959.089797ms]
Jul  7 12:35:40.037: INFO: Created: latency-svc-nf9tx
Jul  7 12:35:40.124: INFO: Got endpoints: latency-svc-nf9tx [1.038049998s]
Jul  7 12:35:40.155: INFO: Created: latency-svc-2llmp
Jul  7 12:35:40.173: INFO: Got endpoints: latency-svc-2llmp [981.060337ms]
Jul  7 12:35:40.214: INFO: Created: latency-svc-sd9nb
Jul  7 12:35:40.286: INFO: Got endpoints: latency-svc-sd9nb [1.051386669s]
Jul  7 12:35:40.322: INFO: Created: latency-svc-k7cww
Jul  7 12:35:40.336: INFO: Got endpoints: latency-svc-k7cww [989.33795ms]
Jul  7 12:35:40.358: INFO: Created: latency-svc-wmvfz
Jul  7 12:35:40.372: INFO: Got endpoints: latency-svc-wmvfz [986.989807ms]
Jul  7 12:35:40.473: INFO: Created: latency-svc-k9stv
Jul  7 12:35:40.476: INFO: Got endpoints: latency-svc-k9stv [1.042507238s]
Jul  7 12:35:40.556: INFO: Created: latency-svc-lhtt5
Jul  7 12:35:40.627: INFO: Got endpoints: latency-svc-lhtt5 [1.089425525s]
Jul  7 12:35:40.643: INFO: Created: latency-svc-g4hts
Jul  7 12:35:40.672: INFO: Got endpoints: latency-svc-g4hts [1.088772862s]
Jul  7 12:35:40.715: INFO: Created: latency-svc-gszbd
Jul  7 12:35:40.795: INFO: Got endpoints: latency-svc-gszbd [1.125696747s]
Jul  7 12:35:40.797: INFO: Created: latency-svc-tqzjb
Jul  7 12:35:40.804: INFO: Got endpoints: latency-svc-tqzjb [1.107153167s]
Jul  7 12:35:40.829: INFO: Created: latency-svc-wmxcg
Jul  7 12:35:40.847: INFO: Got endpoints: latency-svc-wmxcg [1.094976875s]
Jul  7 12:35:40.871: INFO: Created: latency-svc-pdmgb
Jul  7 12:35:40.883: INFO: Got endpoints: latency-svc-pdmgb [1.046706466s]
Jul  7 12:35:40.946: INFO: Created: latency-svc-rtxpf
Jul  7 12:35:40.949: INFO: Got endpoints: latency-svc-rtxpf [1.045345444s]
Jul  7 12:35:40.976: INFO: Created: latency-svc-mkkch
Jul  7 12:35:40.992: INFO: Got endpoints: latency-svc-mkkch [1.02940983s]
Jul  7 12:35:41.018: INFO: Created: latency-svc-5jhg8
Jul  7 12:35:41.028: INFO: Got endpoints: latency-svc-5jhg8 [1.032997039s]
Jul  7 12:35:41.095: INFO: Created: latency-svc-8w4cf
Jul  7 12:35:41.112: INFO: Got endpoints: latency-svc-8w4cf [987.912504ms]
Jul  7 12:35:41.147: INFO: Created: latency-svc-kqgx5
Jul  7 12:35:41.262: INFO: Got endpoints: latency-svc-kqgx5 [1.088838839s]
Jul  7 12:35:41.264: INFO: Created: latency-svc-gstcv
Jul  7 12:35:41.291: INFO: Got endpoints: latency-svc-gstcv [1.00410451s]
Jul  7 12:35:41.318: INFO: Created: latency-svc-2bzfw
Jul  7 12:35:41.345: INFO: Got endpoints: latency-svc-2bzfw [1.00886775s]
Jul  7 12:35:41.412: INFO: Created: latency-svc-wj8zs
Jul  7 12:35:41.441: INFO: Created: latency-svc-fgnfl
Jul  7 12:35:41.441: INFO: Got endpoints: latency-svc-wj8zs [1.069084605s]
Jul  7 12:35:41.486: INFO: Got endpoints: latency-svc-fgnfl [1.010006696s]
Jul  7 12:35:41.610: INFO: Created: latency-svc-2s87c
Jul  7 12:35:41.612: INFO: Got endpoints: latency-svc-2s87c [985.068944ms]
Jul  7 12:35:41.666: INFO: Created: latency-svc-jvq6c
Jul  7 12:35:41.682: INFO: Got endpoints: latency-svc-jvq6c [1.009220193s]
Jul  7 12:35:41.766: INFO: Created: latency-svc-wzxvw
Jul  7 12:35:41.768: INFO: Got endpoints: latency-svc-wzxvw [972.568097ms]
Jul  7 12:35:41.804: INFO: Created: latency-svc-g2clq
Jul  7 12:35:41.820: INFO: Got endpoints: latency-svc-g2clq [1.015890052s]
Jul  7 12:35:41.858: INFO: Created: latency-svc-5rmd4
Jul  7 12:35:41.946: INFO: Got endpoints: latency-svc-5rmd4 [1.098402979s]
Jul  7 12:35:41.947: INFO: Created: latency-svc-cwwtp
Jul  7 12:35:41.958: INFO: Got endpoints: latency-svc-cwwtp [1.07455342s]
Jul  7 12:35:41.980: INFO: Created: latency-svc-tpwht
Jul  7 12:35:41.994: INFO: Got endpoints: latency-svc-tpwht [1.045876831s]
Jul  7 12:35:42.017: INFO: Created: latency-svc-5z29x
Jul  7 12:35:42.025: INFO: Got endpoints: latency-svc-5z29x [1.0322745s]
Jul  7 12:35:42.119: INFO: Created: latency-svc-w94n9
Jul  7 12:35:42.122: INFO: Got endpoints: latency-svc-w94n9 [1.094722694s]
Jul  7 12:35:42.155: INFO: Created: latency-svc-pghs2
Jul  7 12:35:42.169: INFO: Got endpoints: latency-svc-pghs2 [1.056929913s]
Jul  7 12:35:42.191: INFO: Created: latency-svc-6rjth
Jul  7 12:35:42.199: INFO: Got endpoints: latency-svc-6rjth [937.054382ms]
Jul  7 12:35:42.293: INFO: Created: latency-svc-s8f64
Jul  7 12:35:42.296: INFO: Got endpoints: latency-svc-s8f64 [1.005484709s]
Jul  7 12:35:42.325: INFO: Created: latency-svc-6769s
Jul  7 12:35:42.344: INFO: Got endpoints: latency-svc-6769s [998.976693ms]
Jul  7 12:35:42.370: INFO: Created: latency-svc-x4s6h
Jul  7 12:35:42.386: INFO: Got endpoints: latency-svc-x4s6h [945.04055ms]
Jul  7 12:35:42.472: INFO: Created: latency-svc-zhlwl
Jul  7 12:35:42.482: INFO: Got endpoints: latency-svc-zhlwl [996.237037ms]
Jul  7 12:35:42.531: INFO: Created: latency-svc-dsbcm
Jul  7 12:35:42.561: INFO: Got endpoints: latency-svc-dsbcm [948.258302ms]
Jul  7 12:35:42.652: INFO: Created: latency-svc-4mt2j
Jul  7 12:35:42.686: INFO: Got endpoints: latency-svc-4mt2j [1.00414275s]
Jul  7 12:35:42.743: INFO: Created: latency-svc-w5jbm
Jul  7 12:35:42.801: INFO: Got endpoints: latency-svc-w5jbm [1.033463207s]
Jul  7 12:35:42.803: INFO: Created: latency-svc-k7wbn
Jul  7 12:35:42.813: INFO: Got endpoints: latency-svc-k7wbn [992.791778ms]
Jul  7 12:35:42.878: INFO: Created: latency-svc-vgk22
Jul  7 12:35:42.898: INFO: Got endpoints: latency-svc-vgk22 [952.838865ms]
Jul  7 12:35:42.969: INFO: Created: latency-svc-5gt4m
Jul  7 12:35:42.988: INFO: Got endpoints: latency-svc-5gt4m [1.030108386s]
Jul  7 12:35:42.988: INFO: Latencies: [133.783675ms 191.599515ms 265.334435ms 302.478932ms 307.033397ms 345.673568ms 411.833459ms 486.560938ms 560.939057ms 609.850355ms 705.527316ms 784.951717ms 849.106616ms 894.810356ms 936.933604ms 937.054382ms 941.881747ms 941.919431ms 944.151488ms 945.04055ms 946.093505ms 948.258302ms 952.838865ms 958.397115ms 959.089797ms 961.909799ms 963.940013ms 968.655036ms 972.568097ms 974.26416ms 974.28976ms 981.060337ms 981.547348ms 983.381434ms 985.068944ms 986.989807ms 987.371082ms 987.912504ms 989.33795ms 990.364265ms 992.663643ms 992.791778ms 996.237037ms 998.976693ms 999.136313ms 999.9452ms 1.000523897s 1.00410451s 1.00414275s 1.005295702s 1.005484709s 1.007544789s 1.00886775s 1.009220193s 1.009979376s 1.010006696s 1.010154431s 1.010435584s 1.012965524s 1.015647413s 1.015890052s 1.019182677s 1.019303366s 1.023161976s 1.02940983s 1.030108386s 1.032143075s 1.0322745s 1.032726022s 1.032997039s 1.033463207s 1.034291436s 1.036063213s 1.036416817s 1.038049998s 1.038072908s 1.041509396s 1.042036853s 1.042507238s 1.045345444s 1.045876831s 1.04661736s 1.046706466s 1.048584606s 1.051386669s 1.052815047s 1.054545016s 1.055314154s 1.056929913s 1.060748597s 1.061997567s 1.0625812s 1.069084605s 1.069925033s 1.070234517s 1.071202242s 1.072434598s 1.07455342s 1.076695889s 1.076784068s 1.082100866s 1.088597759s 1.088728088s 1.088772862s 1.088838839s 1.089362267s 1.089425525s 1.091825044s 1.09415247s 1.094722694s 1.094976875s 1.09765638s 1.098402979s 1.100346141s 1.100857264s 1.101355759s 1.103972594s 1.104667496s 1.106494352s 1.107153167s 1.111101437s 1.112850838s 1.117652254s 1.118421997s 1.11870993s 1.118956535s 1.119453974s 1.119888037s 1.120845449s 1.125696747s 1.12605816s 1.128947813s 1.129664649s 1.130049313s 1.130620122s 1.133750942s 1.136426696s 1.13863197s 1.142967767s 1.144045438s 1.148886622s 1.149012425s 1.149117424s 1.149380532s 1.150816452s 1.151306996s 1.15279378s 1.153155319s 1.154086419s 1.154269618s 1.156049938s 1.158365077s 1.160403959s 1.163228351s 1.16360929s 1.164048318s 1.164231363s 1.166050079s 1.166125365s 1.166458422s 1.16717802s 1.169398481s 1.170615418s 1.1715795s 1.171671195s 1.173457532s 1.174074381s 1.174291346s 1.174602496s 1.178838188s 1.178963738s 1.179041313s 1.18769215s 1.193465984s 1.195244771s 1.198911982s 1.199289694s 1.202973502s 1.205937923s 1.208615184s 1.210788223s 1.213796699s 1.214620936s 1.227225511s 1.239912717s 1.241446728s 1.242472667s 1.244193291s 1.249170945s 1.26154618s 1.273542495s 1.305635184s 1.324280915s 1.327579605s 1.328321622s 1.336513892s 1.337709323s 1.349113477s 1.354824172s 1.404624818s]
Jul  7 12:35:42.988: INFO: 50 %ile: 1.082100866s
Jul  7 12:35:42.988: INFO: 90 %ile: 1.210788223s
Jul  7 12:35:42.988: INFO: 99 %ile: 1.354824172s
Jul  7 12:35:42.988: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:35:42.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-8xp9c" for this suite.
Jul  7 12:36:17.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:36:17.038: INFO: namespace: e2e-tests-svc-latency-8xp9c, resource: bindings, ignored listing per whitelist
Jul  7 12:36:17.084: INFO: namespace e2e-tests-svc-latency-8xp9c deletion completed in 34.082510446s

• [SLOW TEST:53.129 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
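Editor's note: the "Service endpoints latency" spec above creates 200 services against a single backing pod, records how long each one takes to get endpoints, and then reports the 50th/90th/99th percentiles of the sorted samples. The self-contained Go sketch below reproduces that summary step; the index formula and the sample values are assumptions for illustration, not the framework's exact code or data.

```go
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile of an ascending-sorted slice,
// using a simple nearest-rank style index (an assumption, for illustration).
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(float64(len(sorted))*p/100.0) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Illustrative subset of the latencies printed in the log above.
	latencies := []time.Duration{
		133783675 * time.Nanosecond,
		1082100866 * time.Nanosecond,
		1210788223 * time.Nanosecond,
		1404624818 * time.Nanosecond,
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	fmt.Printf("50 %%ile: %v\n", percentile(latencies, 50))
	fmt.Printf("90 %%ile: %v\n", percentile(latencies, 90))
	fmt.Printf("99 %%ile: %v\n", percentile(latencies, 99))
	fmt.Printf("Total sample count: %d\n", len(latencies))
}
```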
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:36:17.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 12:36:17.223: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jul  7 12:36:22.227: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  7 12:36:22.227: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jul  7 12:36:24.231: INFO: Creating deployment "test-rollover-deployment"
Jul  7 12:36:24.248: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jul  7 12:36:26.254: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jul  7 12:36:26.258: INFO: Ensure that both replica sets have 1 created replica
Jul  7 12:36:26.263: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jul  7 12:36:26.269: INFO: Updating deployment test-rollover-deployment
Jul  7 12:36:26.269: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jul  7 12:36:28.306: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jul  7 12:36:28.312: INFO: Make sure deployment "test-rollover-deployment" is complete
Jul  7 12:36:28.317: INFO: all replica sets need to contain the pod-template-hash label
Jul  7 12:36:28.317: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722186, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 12:36:30.389: INFO: all replica sets need to contain the pod-template-hash label
Jul  7 12:36:30.389: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722186, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 12:36:32.326: INFO: all replica sets need to contain the pod-template-hash label
Jul  7 12:36:32.326: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722191, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 12:36:34.365: INFO: all replica sets need to contain the pod-template-hash label
Jul  7 12:36:34.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722191, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 12:36:36.344: INFO: all replica sets need to contain the pod-template-hash label
Jul  7 12:36:36.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722191, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 12:36:38.324: INFO: all replica sets need to contain the pod-template-hash label
Jul  7 12:36:38.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722191, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 12:36:40.325: INFO: all replica sets need to contain the pod-template-hash label
Jul  7 12:36:40.325: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722191, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729722184, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 12:36:42.454: INFO: 
Jul  7 12:36:42.454: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul  7 12:36:42.460: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-x2bgc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x2bgc/deployments/test-rollover-deployment,UID:780d39a1-c04e-11ea-a300-0242ac110004,ResourceVersion:612010,Generation:2,CreationTimestamp:2020-07-07 12:36:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-07 12:36:24 +0000 UTC 2020-07-07 12:36:24 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-07 12:36:41 +0000 UTC 2020-07-07 12:36:24 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jul  7 12:36:42.463: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-x2bgc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x2bgc/replicasets/test-rollover-deployment-5b8479fdb6,UID:79440f36-c04e-11ea-a300-0242ac110004,ResourceVersion:612000,Generation:2,CreationTimestamp:2020-07-07 12:36:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 780d39a1-c04e-11ea-a300-0242ac110004 0xc001f19637 0xc001f19638}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul  7 12:36:42.463: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jul  7 12:36:42.463: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-x2bgc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x2bgc/replicasets/test-rollover-controller,UID:73dc99a4-c04e-11ea-a300-0242ac110004,ResourceVersion:612009,Generation:2,CreationTimestamp:2020-07-07 12:36:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 780d39a1-c04e-11ea-a300-0242ac110004 0xc001f19107 0xc001f19108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  7 12:36:42.463: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-x2bgc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-x2bgc/replicasets/test-rollover-deployment-58494b7559,UID:7810c1eb-c04e-11ea-a300-0242ac110004,ResourceVersion:611963,Generation:2,CreationTimestamp:2020-07-07 12:36:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 780d39a1-c04e-11ea-a300-0242ac110004 0xc001f19507 0xc001f19508}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  7 12:36:42.466: INFO: Pod "test-rollover-deployment-5b8479fdb6-g2fwq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-g2fwq,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-x2bgc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-x2bgc/pods/test-rollover-deployment-5b8479fdb6-g2fwq,UID:795a8f5a-c04e-11ea-a300-0242ac110004,ResourceVersion:611978,Generation:0,CreationTimestamp:2020-07-07 12:36:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 79440f36-c04e-11ea-a300-0242ac110004 0xc00159ec57 0xc00159ec58}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9j7kj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9j7kj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-9j7kj true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00159ee60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00159ee80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:36:26 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:36:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:36:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 12:36:26 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.2,PodIP:10.244.1.132,StartTime:2020-07-07 12:36:26 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-07 12:36:30 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://575803e7efff7cc941073c2650538f0b6528e4dea5e72c855b51eb3c2b4c2d97}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:36:42.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-x2bgc" for this suite.
Jul  7 12:36:50.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:36:50.700: INFO: namespace: e2e-tests-deployment-x2bgc, resource: bindings, ignored listing per whitelist
Jul  7 12:36:50.754: INFO: namespace e2e-tests-deployment-x2bgc deletion completed in 8.284440212s

• [SLOW TEST:33.670 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
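Editor's note: the rollover spec above updates the Deployment's pod template to a new image and waits until the old ReplicaSets drain to zero replicas. The Deployment dump shows the rolling-update parameters it relies on: maxUnavailable=0, maxSurge=1 and minReadySeconds=10, which is why the status stays at UnavailableReplicas:1 until the new pod has been Ready for 10 seconds. A short sketch of just those strategy fields, expressed with the k8s.io/api types (everything else omitted, so this spec alone is not a complete, creatable Deployment):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	maxUnavailable := intstr.FromInt(0) // never drop below the desired replica count
	maxSurge := intstr.FromInt(1)       // allow one extra pod while rolling over

	spec := appsv1.DeploymentSpec{
		// A new pod must stay Ready for 10s before it counts as available.
		MinReadySeconds: 10,
		Strategy: appsv1.DeploymentStrategy{
			Type: appsv1.RollingUpdateDeploymentStrategyType,
			RollingUpdate: &appsv1.RollingUpdateDeployment{
				MaxUnavailable: &maxUnavailable,
				MaxSurge:       &maxSurge,
			},
		},
	}
	fmt.Println(spec.Strategy.Type, spec.MinReadySeconds)
}
```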
SSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:36:50.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-87eca9bc-c04e-11ea-9ad7-0242ac11001b
STEP: Creating secret with name secret-projected-all-test-volume-87eca992-c04e-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul  7 12:36:50.892: INFO: Waiting up to 5m0s for pod "projected-volume-87eca941-c04e-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-gjt4s" to be "success or failure"
Jul  7 12:36:50.896: INFO: Pod "projected-volume-87eca941-c04e-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.878932ms
Jul  7 12:36:52.988: INFO: Pod "projected-volume-87eca941-c04e-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095291485s
Jul  7 12:36:54.992: INFO: Pod "projected-volume-87eca941-c04e-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099587952s
Jul  7 12:36:56.997: INFO: Pod "projected-volume-87eca941-c04e-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104553751s
STEP: Saw pod success
Jul  7 12:36:56.997: INFO: Pod "projected-volume-87eca941-c04e-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:36:57.000: INFO: Trying to get logs from node hunter-worker pod projected-volume-87eca941-c04e-11ea-9ad7-0242ac11001b container projected-all-volume-test: 
STEP: delete the pod
Jul  7 12:36:57.068: INFO: Waiting for pod projected-volume-87eca941-c04e-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:36:57.094: INFO: Pod projected-volume-87eca941-c04e-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:36:57.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gjt4s" for this suite.
Jul  7 12:37:03.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:37:03.130: INFO: namespace: e2e-tests-projected-gjt4s, resource: bindings, ignored listing per whitelist
Jul  7 12:37:03.188: INFO: namespace e2e-tests-projected-gjt4s deletion completed in 6.089177254s

• [SLOW TEST:12.434 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
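Editor's note: the projected-volume spec above mounts a configMap, a secret and downward API fields through a single volume. A minimal sketch of such a volume definition using the Go API types; the configMap and secret names are placeholders, not the generated names from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One projected volume combining the three source types that the
	// "all components that make up the projection API" spec checks.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}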
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:37:03.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 12:37:07.365: INFO: Waiting up to 5m0s for pod "client-envvars-91c068f5-c04e-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-pods-zgtt5" to be "success or failure"
Jul  7 12:37:07.425: INFO: Pod "client-envvars-91c068f5-c04e-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 60.490586ms
Jul  7 12:37:09.428: INFO: Pod "client-envvars-91c068f5-c04e-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063478107s
Jul  7 12:37:11.467: INFO: Pod "client-envvars-91c068f5-c04e-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.102337111s
STEP: Saw pod success
Jul  7 12:37:11.467: INFO: Pod "client-envvars-91c068f5-c04e-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:37:11.469: INFO: Trying to get logs from node hunter-worker pod client-envvars-91c068f5-c04e-11ea-9ad7-0242ac11001b container env3cont: 
STEP: delete the pod
Jul  7 12:37:11.629: INFO: Waiting for pod client-envvars-91c068f5-c04e-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:37:11.634: INFO: Pod client-envvars-91c068f5-c04e-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:37:11.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-zgtt5" for this suite.
Jul  7 12:37:55.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:37:55.668: INFO: namespace: e2e-tests-pods-zgtt5, resource: bindings, ignored listing per whitelist
Jul  7 12:37:55.798: INFO: namespace e2e-tests-pods-zgtt5 deletion completed in 44.160358714s

• [SLOW TEST:52.610 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
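Editor's note: the Pods spec above checks that the kubelet injects Docker-link-style environment variables for every Service that exists when the pod starts (for example FOO_SERVICE_HOST / FOO_SERVICE_PORT). A small stdlib-only sketch of what a test container effectively does when it lists those variables from inside a pod; the variable-name pattern is the general convention, not the exact names from this run.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Inside a pod, each Service visible at pod-start time contributes
	// variables of the form <NAME>_SERVICE_HOST and <NAME>_SERVICE_PORT.
	for _, kv := range os.Environ() {
		if strings.Contains(kv, "_SERVICE_HOST=") || strings.Contains(kv, "_SERVICE_PORT=") {
			fmt.Println(kv)
		}
	}
}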
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:37:55.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jul  7 12:37:55.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-pjxgp'
Jul  7 12:37:58.805: INFO: stderr: ""
Jul  7 12:37:58.805: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jul  7 12:37:59.821: INFO: Selector matched 1 pods for map[app:redis]
Jul  7 12:37:59.821: INFO: Found 0 / 1
Jul  7 12:38:01.087: INFO: Selector matched 1 pods for map[app:redis]
Jul  7 12:38:01.088: INFO: Found 0 / 1
Jul  7 12:38:01.941: INFO: Selector matched 1 pods for map[app:redis]
Jul  7 12:38:01.941: INFO: Found 0 / 1
Jul  7 12:38:02.808: INFO: Selector matched 1 pods for map[app:redis]
Jul  7 12:38:02.808: INFO: Found 1 / 1
Jul  7 12:38:02.808: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul  7 12:38:02.811: INFO: Selector matched 1 pods for map[app:redis]
Jul  7 12:38:02.811: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching strings
Jul  7 12:38:02.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2kh5l redis-master --namespace=e2e-tests-kubectl-pjxgp'
Jul  7 12:38:02.918: INFO: stderr: ""
Jul  7 12:38:02.918: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Jul 12:38:02.246 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jul 12:38:02.246 # Server started, Redis version 3.2.12\n1:M 07 Jul 12:38:02.246 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jul 12:38:02.247 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jul  7 12:38:02.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2kh5l redis-master --namespace=e2e-tests-kubectl-pjxgp --tail=1'
Jul  7 12:38:03.027: INFO: stderr: ""
Jul  7 12:38:03.028: INFO: stdout: "1:M 07 Jul 12:38:02.247 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jul  7 12:38:03.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2kh5l redis-master --namespace=e2e-tests-kubectl-pjxgp --limit-bytes=1'
Jul  7 12:38:03.124: INFO: stderr: ""
Jul  7 12:38:03.124: INFO: stdout: " "
STEP: exposing timestamps
Jul  7 12:38:03.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2kh5l redis-master --namespace=e2e-tests-kubectl-pjxgp --tail=1 --timestamps'
Jul  7 12:38:03.227: INFO: stderr: ""
Jul  7 12:38:03.227: INFO: stdout: "2020-07-07T12:38:02.247074987Z 1:M 07 Jul 12:38:02.247 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jul  7 12:38:05.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2kh5l redis-master --namespace=e2e-tests-kubectl-pjxgp --since=1s'
Jul  7 12:38:05.856: INFO: stderr: ""
Jul  7 12:38:05.856: INFO: stdout: ""
Jul  7 12:38:05.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-2kh5l redis-master --namespace=e2e-tests-kubectl-pjxgp --since=24h'
Jul  7 12:38:05.974: INFO: stderr: ""
Jul  7 12:38:05.974: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 07 Jul 12:38:02.246 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 07 Jul 12:38:02.246 # Server started, Redis version 3.2.12\n1:M 07 Jul 12:38:02.246 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 07 Jul 12:38:02.247 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jul  7 12:38:05.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-pjxgp'
Jul  7 12:38:06.087: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  7 12:38:06.087: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jul  7 12:38:06.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-pjxgp'
Jul  7 12:38:06.193: INFO: stderr: "No resources found.\n"
Jul  7 12:38:06.194: INFO: stdout: ""
Jul  7 12:38:06.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-pjxgp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  7 12:38:06.296: INFO: stderr: ""
Jul  7 12:38:06.296: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:38:06.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-pjxgp" for this suite.
Jul  7 12:38:28.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:38:28.475: INFO: namespace: e2e-tests-kubectl-pjxgp, resource: bindings, ignored listing per whitelist
Jul  7 12:38:28.530: INFO: namespace e2e-tests-kubectl-pjxgp deletion completed in 22.229830431s

• [SLOW TEST:32.731 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
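Editor's note: the kubectl invocations above fetch container logs and then narrow them with --tail, --limit-bytes, --timestamps and --since. The same knobs exist on the API side as PodLogOptions; below is a hedged client-go sketch, assuming a recent client-go where Stream takes a context. The namespace, pod and container names are placeholders echoing the ones in the log, not guaranteed to exist.

package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Roughly: kubectl logs redis-master-2kh5l redis-master --tail=1 --timestamps --since=24h
	opts := &corev1.PodLogOptions{
		Container:    "redis-master",
		TailLines:    int64Ptr(1),
		Timestamps:   true,
		SinceSeconds: int64Ptr(24 * 60 * 60),
	}
	rc, err := cs.CoreV1().Pods("default").GetLogs("redis-master-2kh5l", opts).Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	out, _ := io.ReadAll(rc)
	fmt.Print(string(out))
}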
SSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:38:28.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-c2392503-c04e-11ea-9ad7-0242ac11001b
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:38:34.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ks6nz" for this suite.
Jul  7 12:38:56.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:38:56.749: INFO: namespace: e2e-tests-configmap-ks6nz, resource: bindings, ignored listing per whitelist
Jul  7 12:38:56.802: INFO: namespace e2e-tests-configmap-ks6nz deletion completed in 22.09819212s

• [SLOW TEST:28.272 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
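Editor's note: the ConfigMap spec above stores both text and binary payloads in one object and checks that both survive the round trip through a volume. A minimal sketch of such a ConfigMap with the Go types: data holds UTF-8 strings, binaryData holds arbitrary bytes (base64-encoded on the wire); names and values are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data:       map[string]string{"data": "value-1"},
		// BinaryData may contain bytes that are not valid UTF-8; they are
		// serialized as base64 in the JSON/YAML representation.
		BinaryData: map[string][]byte{"dump": {0xde, 0xca, 0xfe, 0x00, 0xba, 0xad}},
	}
	out, _ := json.MarshalIndent(cm, "", "  ")
	fmt.Println(string(out))
}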
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:38:56.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 12:38:56.908: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:38:58.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-jj8rc" for this suite.
Jul  7 12:39:04.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:39:04.227: INFO: namespace: e2e-tests-custom-resource-definition-jj8rc, resource: bindings, ignored listing per whitelist
Jul  7 12:39:04.242: INFO: namespace e2e-tests-custom-resource-definition-jj8rc deletion completed in 6.119983709s

• [SLOW TEST:7.440 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
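Editor's note: the CustomResourceDefinition spec above simply creates a CRD against the API server and deletes it again. A hedged sketch of a comparable CRD object using the current apiextensions v1 Go types (this v1.13-era cluster would have used the older v1beta1 API); the group, kind and plural are the generic documentation-style placeholders, not the randomized names the suite generates.

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	crd := apiextensionsv1.CustomResourceDefinition{
		// The object name must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "crontabs.stable.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "stable.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "crontabs",
				Singular: "crontab",
				Kind:     "CronTab",
				ListKind: "CronTabList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: boolPtr(true),
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}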
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:39:04.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jul  7 12:39:04.428: INFO: Waiting up to 5m0s for pod "client-containers-d7880dcd-c04e-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-containers-swt44" to be "success or failure"
Jul  7 12:39:04.431: INFO: Pod "client-containers-d7880dcd-c04e-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.572974ms
Jul  7 12:39:06.435: INFO: Pod "client-containers-d7880dcd-c04e-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007374418s
Jul  7 12:39:08.468: INFO: Pod "client-containers-d7880dcd-c04e-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0400244s
STEP: Saw pod success
Jul  7 12:39:08.468: INFO: Pod "client-containers-d7880dcd-c04e-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:39:08.471: INFO: Trying to get logs from node hunter-worker pod client-containers-d7880dcd-c04e-11ea-9ad7-0242ac11001b container test-container: 
STEP: delete the pod
Jul  7 12:39:08.500: INFO: Waiting for pod client-containers-d7880dcd-c04e-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:39:08.546: INFO: Pod client-containers-d7880dcd-c04e-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:39:08.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-swt44" for this suite.
Jul  7 12:39:14.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:39:15.009: INFO: namespace: e2e-tests-containers-swt44, resource: bindings, ignored listing per whitelist
Jul  7 12:39:15.030: INFO: namespace e2e-tests-containers-swt44 deletion completed in 6.480786937s

• [SLOW TEST:10.788 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
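Editor's note: the Docker Containers spec above verifies that a pod's command field replaces the image's Docker ENTRYPOINT (args would replace CMD). A minimal container sketch with the Go types; the image and command are placeholders in the spirit of the test, not the ones it actually runs.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "docker.io/library/busybox:1.29",
		// Command overrides the image ENTRYPOINT; Args would override CMD.
		Command: []string{"/bin/sh", "-c", "echo overridden entrypoint"},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}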
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:39:15.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-dded9d31-c04e-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume configMaps
Jul  7 12:39:15.166: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ddee4db5-c04e-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-projected-8tndk" to be "success or failure"
Jul  7 12:39:15.188: INFO: Pod "pod-projected-configmaps-ddee4db5-c04e-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.932092ms
Jul  7 12:39:17.191: INFO: Pod "pod-projected-configmaps-ddee4db5-c04e-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025371659s
Jul  7 12:39:19.194: INFO: Pod "pod-projected-configmaps-ddee4db5-c04e-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.028516905s
Jul  7 12:39:21.198: INFO: Pod "pod-projected-configmaps-ddee4db5-c04e-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032492083s
STEP: Saw pod success
Jul  7 12:39:21.198: INFO: Pod "pod-projected-configmaps-ddee4db5-c04e-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:39:21.201: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-ddee4db5-c04e-11ea-9ad7-0242ac11001b container projected-configmap-volume-test: 
STEP: delete the pod
Jul  7 12:39:21.226: INFO: Waiting for pod pod-projected-configmaps-ddee4db5-c04e-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:39:21.236: INFO: Pod pod-projected-configmaps-ddee4db5-c04e-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:39:21.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8tndk" for this suite.
Jul  7 12:39:27.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:39:27.412: INFO: namespace: e2e-tests-projected-8tndk, resource: bindings, ignored listing per whitelist
Jul  7 12:39:27.442: INFO: namespace e2e-tests-projected-8tndk deletion completed in 6.202091446s

• [SLOW TEST:12.412 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
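Editor's note: the projected configMap spec above mounts the volume with an explicit defaultMode and checks the resulting file permissions inside the pod. A short sketch of that volume definition; the 0400 mode and the names are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	mode := int32Ptr(0400) // projected files appear as r-------- in the container
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: mode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}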
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:39:27.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-e557cb5c-c04e-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume secrets
Jul  7 12:39:27.639: INFO: Waiting up to 5m0s for pod "pod-secrets-e55af6db-c04e-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-secrets-g4mdn" to be "success or failure"
Jul  7 12:39:27.642: INFO: Pod "pod-secrets-e55af6db-c04e-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.579225ms
Jul  7 12:39:29.738: INFO: Pod "pod-secrets-e55af6db-c04e-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099018169s
Jul  7 12:39:31.741: INFO: Pod "pod-secrets-e55af6db-c04e-11ea-9ad7-0242ac11001b": Phase="Running", Reason="", readiness=true. Elapsed: 4.102243981s
Jul  7 12:39:33.745: INFO: Pod "pod-secrets-e55af6db-c04e-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.105791748s
STEP: Saw pod success
Jul  7 12:39:33.745: INFO: Pod "pod-secrets-e55af6db-c04e-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:39:33.747: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-e55af6db-c04e-11ea-9ad7-0242ac11001b container secret-volume-test: 
STEP: delete the pod
Jul  7 12:39:33.810: INFO: Waiting for pod pod-secrets-e55af6db-c04e-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:39:33.822: INFO: Pod pod-secrets-e55af6db-c04e-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:39:33.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-g4mdn" for this suite.
Jul  7 12:39:39.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:39:39.998: INFO: namespace: e2e-tests-secrets-g4mdn, resource: bindings, ignored listing per whitelist
Jul  7 12:39:40.015: INFO: namespace e2e-tests-secrets-g4mdn deletion completed in 6.189994967s

• [SLOW TEST:12.573 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
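Editor's note: the Secrets spec above projects a secret into a volume with items, remapping each key to a chosen file path instead of the default key name. A minimal sketch of that volume source; the key and path names follow the test family's convention but are otherwise placeholders.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map",
				// Each item maps a secret key to an explicit file path under
				// the mount point instead of the default <key> file name.
				Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}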
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:39:40.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jul  7 12:39:40.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-k6bn8 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jul  7 12:39:43.612: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0707 12:39:43.538571    3823 log.go:172] (0xc00072e370) (0xc00075c960) Create stream\nI0707 12:39:43.538637    3823 log.go:172] (0xc00072e370) (0xc00075c960) Stream added, broadcasting: 1\nI0707 12:39:43.541033    3823 log.go:172] (0xc00072e370) Reply frame received for 1\nI0707 12:39:43.541082    3823 log.go:172] (0xc00072e370) (0xc000982000) Create stream\nI0707 12:39:43.541099    3823 log.go:172] (0xc00072e370) (0xc000982000) Stream added, broadcasting: 3\nI0707 12:39:43.542216    3823 log.go:172] (0xc00072e370) Reply frame received for 3\nI0707 12:39:43.542265    3823 log.go:172] (0xc00072e370) (0xc00075ca00) Create stream\nI0707 12:39:43.542278    3823 log.go:172] (0xc00072e370) (0xc00075ca00) Stream added, broadcasting: 5\nI0707 12:39:43.543109    3823 log.go:172] (0xc00072e370) Reply frame received for 5\nI0707 12:39:43.543151    3823 log.go:172] (0xc00072e370) (0xc000828500) Create stream\nI0707 12:39:43.543165    3823 log.go:172] (0xc00072e370) (0xc000828500) Stream added, broadcasting: 7\nI0707 12:39:43.543968    3823 log.go:172] (0xc00072e370) Reply frame received for 7\nI0707 12:39:43.544193    3823 log.go:172] (0xc000982000) (3) Writing data frame\nI0707 12:39:43.544326    3823 log.go:172] (0xc000982000) (3) Writing data frame\nI0707 12:39:43.545391    3823 log.go:172] (0xc00072e370) Data frame received for 5\nI0707 12:39:43.545403    3823 log.go:172] (0xc00075ca00) (5) Data frame handling\nI0707 12:39:43.545410    3823 log.go:172] (0xc00075ca00) (5) Data frame sent\nI0707 12:39:43.546178    3823 log.go:172] (0xc00072e370) Data frame received for 5\nI0707 12:39:43.546230    3823 log.go:172] (0xc00075ca00) (5) Data frame handling\nI0707 12:39:43.546257    3823 log.go:172] (0xc00075ca00) (5) Data frame sent\nI0707 12:39:43.586648    3823 log.go:172] (0xc00072e370) Data frame received for 7\nI0707 12:39:43.586720    3823 log.go:172] (0xc000828500) (7) Data frame handling\nI0707 12:39:43.586837    3823 log.go:172] (0xc00072e370) Data frame received for 5\nI0707 12:39:43.586869    3823 log.go:172] (0xc00075ca00) (5) Data frame handling\nI0707 12:39:43.588296    3823 log.go:172] (0xc00072e370) Data frame received for 1\nI0707 12:39:43.588359    3823 log.go:172] (0xc00072e370) (0xc000982000) Stream removed, broadcasting: 3\nI0707 12:39:43.588424    3823 log.go:172] (0xc00075c960) (1) Data frame handling\nI0707 12:39:43.588463    3823 log.go:172] (0xc00075c960) (1) Data frame sent\nI0707 12:39:43.588488    3823 log.go:172] (0xc00072e370) (0xc00075c960) Stream removed, broadcasting: 1\nI0707 12:39:43.588622    3823 log.go:172] (0xc00072e370) (0xc00075c960) Stream removed, broadcasting: 1\nI0707 12:39:43.588783    3823 log.go:172] (0xc00072e370) (0xc000982000) Stream removed, broadcasting: 3\nI0707 12:39:43.588912    3823 log.go:172] (0xc00072e370) (0xc00075ca00) Stream removed, broadcasting: 5\nI0707 12:39:43.588958    3823 log.go:172] (0xc00072e370) (0xc000828500) Stream removed, broadcasting: 7\nI0707 12:39:43.589063    3823 log.go:172] (0xc00072e370) Go away received\n"
Jul  7 12:39:43.612: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:39:46.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k6bn8" for this suite.
Jul  7 12:39:53.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:39:53.236: INFO: namespace: e2e-tests-kubectl-k6bn8, resource: bindings, ignored listing per whitelist
Jul  7 12:39:53.286: INFO: namespace e2e-tests-kubectl-k6bn8 deletion completed in 6.226076547s

• [SLOW TEST:13.270 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
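Editor's note: the kubectl spec above uses the since-removed `kubectl run --generator=job/v1 --rm --attach --stdin` path: it creates a Job, attaches to the pod's stdin, and deletes the Job once the command returns. A hedged sketch of an equivalent Job object built directly with the Go types; the image and command echo the ones quoted in the log, everything else is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "e2e-test-rm-busybox-job",
						Image:   "docker.io/library/busybox:1.29",
						Command: []string{"sh", "-c", "cat && echo 'stdin closed'"},
						Stdin:   true,
						// StdinOnce closes stdin after the first attach,
						// matching the one-shot --attach --stdin behaviour.
						StdinOnce: true,
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(out))
}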
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:39:53.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jul  7 12:39:53.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jul  7 12:39:53.559: INFO: stderr: ""
Jul  7 12:39:53.559: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32775\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32775/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:39:53.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2zr6s" for this suite.
Jul  7 12:39:59.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:39:59.690: INFO: namespace: e2e-tests-kubectl-2zr6s, resource: bindings, ignored listing per whitelist
Jul  7 12:39:59.702: INFO: namespace e2e-tests-kubectl-2zr6s deletion completed in 6.082108945s

• [SLOW TEST:6.416 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:39:59.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul  7 12:39:59.852: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul  7 12:39:59.871: INFO: Number of nodes with available pods: 0
Jul  7 12:39:59.871: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul  7 12:39:59.989: INFO: Number of nodes with available pods: 0
Jul  7 12:39:59.989: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:01.027: INFO: Number of nodes with available pods: 0
Jul  7 12:40:01.027: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:02.080: INFO: Number of nodes with available pods: 0
Jul  7 12:40:02.080: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:02.992: INFO: Number of nodes with available pods: 0
Jul  7 12:40:02.992: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:03.993: INFO: Number of nodes with available pods: 1
Jul  7 12:40:03.993: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul  7 12:40:04.031: INFO: Number of nodes with available pods: 1
Jul  7 12:40:04.031: INFO: Number of running nodes: 0, number of available pods: 1
Jul  7 12:40:05.036: INFO: Number of nodes with available pods: 0
Jul  7 12:40:05.036: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul  7 12:40:05.047: INFO: Number of nodes with available pods: 0
Jul  7 12:40:05.047: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:06.050: INFO: Number of nodes with available pods: 0
Jul  7 12:40:06.050: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:07.050: INFO: Number of nodes with available pods: 0
Jul  7 12:40:07.050: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:08.051: INFO: Number of nodes with available pods: 0
Jul  7 12:40:08.051: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:09.051: INFO: Number of nodes with available pods: 0
Jul  7 12:40:09.051: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:10.050: INFO: Number of nodes with available pods: 0
Jul  7 12:40:10.050: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:11.051: INFO: Number of nodes with available pods: 0
Jul  7 12:40:11.051: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:12.050: INFO: Number of nodes with available pods: 0
Jul  7 12:40:12.050: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:13.050: INFO: Number of nodes with available pods: 0
Jul  7 12:40:13.050: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:14.051: INFO: Number of nodes with available pods: 0
Jul  7 12:40:14.051: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:15.074: INFO: Number of nodes with available pods: 0
Jul  7 12:40:15.074: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:16.050: INFO: Number of nodes with available pods: 0
Jul  7 12:40:16.050: INFO: Node hunter-worker is running more than one daemon pod
Jul  7 12:40:17.049: INFO: Number of nodes with available pods: 1
Jul  7 12:40:17.050: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-chpp5, will wait for the garbage collector to delete the pods
Jul  7 12:40:17.112: INFO: Deleting DaemonSet.extensions daemon-set took: 5.7552ms
Jul  7 12:40:17.212: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.155648ms
Jul  7 12:40:23.815: INFO: Number of nodes with available pods: 0
Jul  7 12:40:23.815: INFO: Number of running nodes: 0, number of available pods: 0
Jul  7 12:40:23.818: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-chpp5/daemonsets","resourceVersion":"612826"},"items":null}

Jul  7 12:40:23.820: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-chpp5/pods","resourceVersion":"612826"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:40:23.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-chpp5" for this suite.
Jul  7 12:40:29.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:40:29.950: INFO: namespace: e2e-tests-daemonsets-chpp5, resource: bindings, ignored listing per whitelist
Jul  7 12:40:29.953: INFO: namespace e2e-tests-daemonsets-chpp5 deletion completed in 6.07952731s

• [SLOW TEST:30.251 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
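Editor's note: the DaemonSet spec above drives scheduling purely through a node selector: pods appear on nodes carrying the "blue" label, drain when the label changes, and return after the selector and update strategy are patched to "green" with RollingUpdate. A minimal sketch of such a DaemonSet; the label key/value and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"daemonset-name": "daemon-set"}},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"daemonset-name": "daemon-set"}},
				Spec: corev1.PodSpec{
					// Daemon pods land only on nodes carrying this label.
					NodeSelector: map[string]string{"color": "green"},
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}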
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:40:29.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul  7 12:40:30.133: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0a9bf0d5-c04f-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-scgnm" to be "success or failure"
Jul  7 12:40:30.137: INFO: Pod "downwardapi-volume-0a9bf0d5-c04f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093879ms
Jul  7 12:40:32.141: INFO: Pod "downwardapi-volume-0a9bf0d5-c04f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008459489s
Jul  7 12:40:34.145: INFO: Pod "downwardapi-volume-0a9bf0d5-c04f-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012140377s
STEP: Saw pod success
Jul  7 12:40:34.145: INFO: Pod "downwardapi-volume-0a9bf0d5-c04f-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:40:34.147: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0a9bf0d5-c04f-11ea-9ad7-0242ac11001b container client-container: 
STEP: delete the pod
Jul  7 12:40:34.291: INFO: Waiting for pod downwardapi-volume-0a9bf0d5-c04f-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:40:34.305: INFO: Pod downwardapi-volume-0a9bf0d5-c04f-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:40:34.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-scgnm" for this suite.
Jul  7 12:40:42.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:40:42.400: INFO: namespace: e2e-tests-downward-api-scgnm, resource: bindings, ignored listing per whitelist
Jul  7 12:40:42.475: INFO: namespace e2e-tests-downward-api-scgnm deletion completed in 8.166444431s

• [SLOW TEST:12.521 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
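Editor's note: the Downward API volume spec above exposes the pod's own name as a file and reads it back from inside the container. A short sketch of that volume; the file path "podname" follows the convention of this test family, the rest is illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					// The container can read its own pod name at <mountPath>/podname.
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}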
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:40:42.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul  7 12:40:42.613: INFO: Waiting up to 5m0s for pod "downward-api-12063bc2-c04f-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-downward-api-krcrw" to be "success or failure"
Jul  7 12:40:42.627: INFO: Pod "downward-api-12063bc2-c04f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.134167ms
Jul  7 12:40:44.741: INFO: Pod "downward-api-12063bc2-c04f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128183958s
Jul  7 12:40:46.745: INFO: Pod "downward-api-12063bc2-c04f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131932587s
Jul  7 12:40:48.749: INFO: Pod "downward-api-12063bc2-c04f-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.13595148s
STEP: Saw pod success
Jul  7 12:40:48.749: INFO: Pod "downward-api-12063bc2-c04f-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:40:48.751: INFO: Trying to get logs from node hunter-worker2 pod downward-api-12063bc2-c04f-11ea-9ad7-0242ac11001b container dapi-container: 
STEP: delete the pod
Jul  7 12:40:48.765: INFO: Waiting for pod downward-api-12063bc2-c04f-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:40:48.770: INFO: Pod downward-api-12063bc2-c04f-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:40:48.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-krcrw" for this suite.
Jul  7 12:40:54.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:40:54.938: INFO: namespace: e2e-tests-downward-api-krcrw, resource: bindings, ignored listing per whitelist
Jul  7 12:40:54.980: INFO: namespace e2e-tests-downward-api-krcrw deletion completed in 6.206770679s

• [SLOW TEST:12.505 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
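Editor's note: the Downward API spec above injects the pod's name, namespace and IP into the container as environment variables via fieldRef. A minimal sketch of the corresponding env entries; the variable names are conventional placeholders, not necessarily the ones the test uses.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := []corev1.EnvVar{
		{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
		{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
		{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}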
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul  7 12:40:54.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-82j2f in namespace e2e-tests-proxy-wknjq
I0707 12:40:55.288476       6 runners.go:184] Created replication controller with name: proxy-service-82j2f, namespace: e2e-tests-proxy-wknjq, replica count: 1
I0707 12:40:56.338948       6 runners.go:184] proxy-service-82j2f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 12:40:57.339199       6 runners.go:184] proxy-service-82j2f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 12:40:58.339415       6 runners.go:184] proxy-service-82j2f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0707 12:40:59.339658       6 runners.go:184] proxy-service-82j2f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 12:41:00.339899       6 runners.go:184] proxy-service-82j2f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 12:41:01.340182       6 runners.go:184] proxy-service-82j2f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 12:41:02.340372       6 runners.go:184] proxy-service-82j2f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 12:41:03.340608       6 runners.go:184] proxy-service-82j2f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 12:41:04.340807       6 runners.go:184] proxy-service-82j2f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 12:41:05.341071       6 runners.go:184] proxy-service-82j2f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 12:41:06.341694       6 runners.go:184] proxy-service-82j2f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 12:41:07.341926       6 runners.go:184] proxy-service-82j2f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0707 12:41:08.342134       6 runners.go:184] proxy-service-82j2f Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  7 12:41:08.346: INFO: setup took 13.113833486s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jul  7 12:41:08.354: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wknjq/services/http:proxy-service-82j2f:portname2/proxy/: bar (200; 8.064666ms)
Jul  7 12:41:08.354: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-wknjq/pods/proxy-service-82j2f-p5q9d/proxy/: >> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-2e87abb3-c04f-11ea-9ad7-0242ac11001b
STEP: Creating a pod to test consume configMaps
Jul  7 12:41:30.399: INFO: Waiting up to 5m0s for pod "pod-configmaps-2e881944-c04f-11ea-9ad7-0242ac11001b" in namespace "e2e-tests-configmap-ddghq" to be "success or failure"
Jul  7 12:41:30.403: INFO: Pod "pod-configmaps-2e881944-c04f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.479717ms
Jul  7 12:41:32.776: INFO: Pod "pod-configmaps-2e881944-c04f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.376813682s
Jul  7 12:41:34.779: INFO: Pod "pod-configmaps-2e881944-c04f-11ea-9ad7-0242ac11001b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.379955665s
Jul  7 12:41:36.783: INFO: Pod "pod-configmaps-2e881944-c04f-11ea-9ad7-0242ac11001b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.383837978s
STEP: Saw pod success
Jul  7 12:41:36.783: INFO: Pod "pod-configmaps-2e881944-c04f-11ea-9ad7-0242ac11001b" satisfied condition "success or failure"
Jul  7 12:41:36.785: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-2e881944-c04f-11ea-9ad7-0242ac11001b container configmap-volume-test: 
STEP: delete the pod
Jul  7 12:41:36.860: INFO: Waiting for pod pod-configmaps-2e881944-c04f-11ea-9ad7-0242ac11001b to disappear
Jul  7 12:41:36.891: INFO: Pod pod-configmaps-2e881944-c04f-11ea-9ad7-0242ac11001b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul  7 12:41:36.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-ddghq" for this suite.
Jul  7 12:41:43.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  7 12:41:43.171: INFO: namespace: e2e-tests-configmap-ddghq, resource: bindings, ignored listing per whitelist
Jul  7 12:41:43.203: INFO: namespace e2e-tests-configmap-ddghq deletion completed in 6.307806175s

• [SLOW TEST:13.399 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
Jul  7 12:41:43.203: INFO: Running AfterSuite actions on all nodes
Jul  7 12:41:43.203: INFO: Running AfterSuite actions on node 1
Jul  7 12:41:43.203: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6877.560 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS