I0214 10:47:14.486839       8 e2e.go:224] Starting e2e run "5c2328e8-4f17-11ea-af88-0242ac110007" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581677233 - Will randomize all specs
Will run 201 of 2164 specs

Feb 14 10:47:15.165: INFO: >>> kubeConfig: /root/.kube/config
Feb 14 10:47:15.172: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 14 10:47:15.199: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 14 10:47:15.240: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 14 10:47:15.240: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 14 10:47:15.240: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 14 10:47:15.248: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 14 10:47:15.248: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 14 10:47:15.248: INFO: e2e test version: v1.13.12
Feb 14 10:47:15.249: INFO: kube-apiserver version: v1.13.8
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:47:15.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Feb 14 10:47:15.414: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 10:47:15.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-r48p2'
Feb 14 10:47:17.227: INFO: stderr: ""
Feb 14 10:47:17.227: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 14 10:47:27.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-r48p2 -o json'
Feb 14 10:47:27.483: INFO: stderr: ""
Feb 14 10:47:27.483: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-02-14T10:47:17Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-r48p2\",\n \"resourceVersion\": \"21630889\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-r48p2/pods/e2e-test-nginx-pod\",\n \"uid\": \"5e352b44-4f17-11ea-a994-fa163e34d433\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-fxhnq\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-fxhnq\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-fxhnq\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-14T10:47:17Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-14T10:47:26Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-14T10:47:26Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-02-14T10:47:17Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://da27ab209dc957fd93eb6de2d64973184faa0af76804270cbcc021ef41a90c75\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-02-14T10:47:25Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.96.1.240\",\n \"phase\": \"Running\",\n \"podIP\": \"10.32.0.4\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-02-14T10:47:17Z\"\n }\n}\n"
STEP: replace the image in the pod
Feb 14 10:47:27.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-r48p2'
Feb 14 10:47:28.007: INFO: stderr: ""
Feb 14 10:47:28.007: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Feb 14 10:47:28.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-r48p2'
Feb 14 10:47:37.163: INFO: stderr: ""
Feb 14 10:47:37.163: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:47:37.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r48p2" for this suite.
Feb 14 10:47:43.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:47:43.315: INFO: namespace: e2e-tests-kubectl-r48p2, resource: bindings, ignored listing per whitelist
Feb 14 10:47:43.404: INFO: namespace e2e-tests-kubectl-r48p2 deletion completed in 6.231235682s

• [SLOW TEST:28.154 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:47:43.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-6dfe6f93-4f17-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 10:47:43.791: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6e0e460d-4f17-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-8znxz" to be "success or failure"
Feb 14 10:47:43.802: INFO: Pod "pod-projected-secrets-6e0e460d-4f17-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.300444ms
Feb 14 10:47:46.012: INFO: Pod "pod-projected-secrets-6e0e460d-4f17-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221097629s
Feb 14 10:47:48.026: INFO: Pod "pod-projected-secrets-6e0e460d-4f17-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234976385s
Feb 14 10:47:50.081: INFO: Pod "pod-projected-secrets-6e0e460d-4f17-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.289609375s
Feb 14 10:47:52.107: INFO: Pod "pod-projected-secrets-6e0e460d-4f17-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.316217674s
Feb 14 10:47:54.227: INFO: Pod "pod-projected-secrets-6e0e460d-4f17-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.435709043s
STEP: Saw pod success
Feb 14 10:47:54.227: INFO: Pod "pod-projected-secrets-6e0e460d-4f17-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 10:47:54.246: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-6e0e460d-4f17-11ea-af88-0242ac110007 container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 10:47:54.484: INFO: Waiting for pod pod-projected-secrets-6e0e460d-4f17-11ea-af88-0242ac110007 to disappear
Feb 14 10:47:54.502: INFO: Pod pod-projected-secrets-6e0e460d-4f17-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:47:54.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8znxz" for this suite.
Feb 14 10:48:00.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:48:00.639: INFO: namespace: e2e-tests-projected-8znxz, resource: bindings, ignored listing per whitelist
Feb 14 10:48:00.689: INFO: namespace e2e-tests-projected-8znxz deletion completed in 6.175948458s

• [SLOW TEST:17.285 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:48:00.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-sxfrw
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 14 10:48:00.865: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 14 10:48:31.257: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-sxfrw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 10:48:31.257: INFO: >>> kubeConfig: /root/.kube/config
I0214 10:48:31.315916       8 log.go:172] (0xc001a94420) (0xc0019f5860) Create stream
I0214 10:48:31.316097       8 log.go:172] (0xc001a94420) (0xc0019f5860) Stream added, broadcasting: 1
I0214 10:48:31.322919       8 log.go:172] (0xc001a94420) Reply frame received for 1
I0214 10:48:31.322974       8 log.go:172] (0xc001a94420) (0xc0017caa00) Create stream
I0214 10:48:31.322984       8 log.go:172] (0xc001a94420) (0xc0017caa00) Stream added, broadcasting: 3
I0214 10:48:31.323808       8 log.go:172] (0xc001a94420) Reply frame received for 3
I0214 10:48:31.323834       8 log.go:172] (0xc001a94420) (0xc001889cc0) Create stream
I0214 10:48:31.323846       8 log.go:172] (0xc001a94420) (0xc001889cc0) Stream added, broadcasting: 5
I0214 10:48:31.324804       8 log.go:172] (0xc001a94420) Reply frame received for 5
I0214 10:48:32.478015       8 log.go:172] (0xc001a94420) Data frame received for 3
I0214 10:48:32.478114       8 log.go:172] (0xc0017caa00) (3) Data frame handling
I0214 10:48:32.478158       8 log.go:172] (0xc0017caa00) (3) Data frame sent
I0214 10:48:32.745853       8 log.go:172] (0xc001a94420) (0xc0017caa00) Stream removed, broadcasting: 3
I0214 10:48:32.746181       8 log.go:172] (0xc001a94420) Data frame received for 1
I0214 10:48:32.746204       8 log.go:172] (0xc0019f5860) (1) Data frame handling
I0214 10:48:32.746247       8 log.go:172] (0xc0019f5860) (1) Data frame sent
I0214 10:48:32.746267       8 log.go:172] (0xc001a94420) (0xc001889cc0) Stream removed, broadcasting: 5
I0214 10:48:32.746343       8 log.go:172] (0xc001a94420) (0xc0019f5860) Stream removed, broadcasting: 1
I0214 10:48:32.746376       8 log.go:172] (0xc001a94420) Go away received
I0214 10:48:32.747115       8 log.go:172] (0xc001a94420) (0xc0019f5860) Stream removed, broadcasting: 1
I0214 10:48:32.747201       8 log.go:172] (0xc001a94420) (0xc0017caa00) Stream removed, broadcasting: 3
I0214 10:48:32.747213       8 log.go:172] (0xc001a94420) (0xc001889cc0) Stream removed, broadcasting: 5
Feb 14 10:48:32.747: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:48:32.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-sxfrw" for this suite.
Feb 14 10:48:57.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:48:57.232: INFO: namespace: e2e-tests-pod-network-test-sxfrw, resource: bindings, ignored listing per whitelist
Feb 14 10:48:57.374: INFO: namespace e2e-tests-pod-network-test-sxfrw deletion completed in 24.600882064s

• [SLOW TEST:56.683 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:48:57.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 14 10:49:08.236: INFO: Successfully updated pod "labelsupdate9a106dd7-4f17-11ea-af88-0242ac110007"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:49:10.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-4mf6w" for this suite.
Feb 14 10:49:34.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:49:34.432: INFO: namespace: e2e-tests-downward-api-4mf6w, resource: bindings, ignored listing per whitelist
Feb 14 10:49:34.566: INFO: namespace e2e-tests-downward-api-4mf6w deletion completed in 24.220527814s

• [SLOW TEST:37.192 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:49:34.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 10:49:34.865: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 18.539836ms)
Feb 14 10:49:34.871: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.227204ms)
Feb 14 10:49:34.877: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.951377ms)
Feb 14 10:49:34.882: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.285372ms)
Feb 14 10:49:34.886: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.306611ms)
Feb 14 10:49:34.893: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.672481ms)
Feb 14 10:49:34.898: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.896662ms)
Feb 14 10:49:34.901: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.446847ms)
Feb 14 10:49:34.906: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.548564ms)
Feb 14 10:49:34.909: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.410273ms)
Feb 14 10:49:34.912: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.852083ms)
Feb 14 10:49:34.916: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.294412ms)
Feb 14 10:49:34.919: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.702578ms)
Feb 14 10:49:34.924: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.18268ms)
Feb 14 10:49:34.927: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.931792ms)
Feb 14 10:49:34.931: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.669349ms)
Feb 14 10:49:34.935: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.908002ms)
Feb 14 10:49:34.939: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.31862ms)
Feb 14 10:49:34.945: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.786857ms)
Feb 14 10:49:34.950: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.101108ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:49:34.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-wqqdk" for this suite.
Feb 14 10:49:41.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:49:41.049: INFO: namespace: e2e-tests-proxy-wqqdk, resource: bindings, ignored listing per whitelist
Feb 14 10:49:41.152: INFO: namespace e2e-tests-proxy-wqqdk deletion completed in 6.198134354s

• [SLOW TEST:6.585 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
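Note: the twenty numbered requests above all hit the node's proxy subresource through the apiserver. Assuming access to a live cluster and using the node name from this run, the same log-directory listing can be fetched by hand; the commands below are a sketch, not part of the test:

# Fetch the kubelet log directory listing via the apiserver's node proxy
# subresource (the same path the test timed above).
kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/"

# Or go through a local API proxy and plain curl:
kubectl proxy --port=8001 &
curl -s "http://127.0.0.1:8001/api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/"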
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:49:41.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 10:49:41.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-97976'
Feb 14 10:49:41.705: INFO: stderr: ""
Feb 14 10:49:41.705: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Feb 14 10:49:41.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-97976'
Feb 14 10:49:52.530: INFO: stderr: ""
Feb 14 10:49:52.530: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:49:52.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-97976" for this suite.
Feb 14 10:49:58.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:49:58.639: INFO: namespace: e2e-tests-kubectl-97976, resource: bindings, ignored listing per whitelist
Feb 14 10:49:58.888: INFO: namespace e2e-tests-kubectl-97976 deletion completed in 6.333610238s

• [SLOW TEST:17.735 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
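Note: this test and the earlier "Kubectl replace" test drive the kubectl binary directly, and the exact invocations are recorded verbatim in the log above. A rough manual equivalent is sketched below; --generator=run-pod/v1 matches the kubectl 1.13 used in this run (newer kubectl creates a bare pod by default), and the sed-based image swap is only an illustration of feeding an edited manifest back through kubectl replace:

NS=e2e-tests-kubectl-97976   # namespace name from this run; any namespace will do

# Create a single-container pod that is not restarted when it exits:
kubectl run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine --namespace="$NS"

# The "Kubectl replace" flow: dump the pod, swap the image, feed it back in.
# (A pod's container image is one of the few fields that may be replaced in place.)
kubectl get pod e2e-test-nginx-pod --namespace="$NS" -o json \
  | sed 's|docker.io/library/nginx:1.14-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f - --namespace="$NS"

# Clean up, as the AfterEach blocks do:
kubectl delete pods e2e-test-nginx-pod --namespace="$NS"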
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:49:58.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-sx2xk/configmap-test-beba738f-4f17-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 10:49:59.117: INFO: Waiting up to 5m0s for pod "pod-configmaps-bebb4727-4f17-11ea-af88-0242ac110007" in namespace "e2e-tests-configmap-sx2xk" to be "success or failure"
Feb 14 10:49:59.123: INFO: Pod "pod-configmaps-bebb4727-4f17-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.451321ms
Feb 14 10:50:01.137: INFO: Pod "pod-configmaps-bebb4727-4f17-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019602886s
Feb 14 10:50:03.177: INFO: Pod "pod-configmaps-bebb4727-4f17-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059627088s
Feb 14 10:50:05.907: INFO: Pod "pod-configmaps-bebb4727-4f17-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.789671982s
Feb 14 10:50:08.097: INFO: Pod "pod-configmaps-bebb4727-4f17-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.980111558s
Feb 14 10:50:10.113: INFO: Pod "pod-configmaps-bebb4727-4f17-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.996364134s
STEP: Saw pod success
Feb 14 10:50:10.113: INFO: Pod "pod-configmaps-bebb4727-4f17-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 10:50:10.124: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-bebb4727-4f17-11ea-af88-0242ac110007 container env-test: 
STEP: delete the pod
Feb 14 10:50:10.397: INFO: Waiting for pod pod-configmaps-bebb4727-4f17-11ea-af88-0242ac110007 to disappear
Feb 14 10:50:10.401: INFO: Pod pod-configmaps-bebb4727-4f17-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:50:10.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-sx2xk" for this suite.
Feb 14 10:50:16.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:50:16.619: INFO: namespace: e2e-tests-configmap-sx2xk, resource: bindings, ignored listing per whitelist
Feb 14 10:50:16.702: INFO: namespace e2e-tests-configmap-sx2xk deletion completed in 6.296506819s

• [SLOW TEST:17.813 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
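Note: outside the framework, the ConfigMap-to-environment wiring this test exercises looks roughly like the manifest below; the ConfigMap key, value and variable name are made up for illustration:

kubectl create configmap configmap-test --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF

kubectl logs pod-configmaps-env   # once the pod has run, the injected variable shows up in the env dump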
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:50:16.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 10:50:48.925: INFO: Container started at 2020-02-14 10:50:24 +0000 UTC, pod became ready at 2020-02-14 10:50:48 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:50:48.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-x5cpw" for this suite.
Feb 14 10:51:13.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:51:13.094: INFO: namespace: e2e-tests-container-probe-x5cpw, resource: bindings, ignored listing per whitelist
Feb 14 10:51:13.207: INFO: namespace e2e-tests-container-probe-x5cpw deletion completed in 24.262682131s

• [SLOW TEST:56.504 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
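Note: the 24-second gap reported above (container started 10:50:24, pod ready 10:50:48) is the point of the test: a readiness probe with an initial delay keeps the pod NotReady for a while without ever restarting it. A minimal stand-alone pod showing the same behaviour, with made-up names and a stand-in image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo            # illustrative; the e2e fixture uses its own image and names
spec:
  containers:
  - name: web
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 30     # pod stays NotReady for at least this long
      periodSeconds: 5
EOF

# READY flips to 1/1 only after the initial delay; RESTARTS stays 0 because
# readiness failures never restart a container (only liveness probes do).
kubectl get pod readiness-demo -w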
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:51:13.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6tx8g
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 14 10:51:13.436: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 14 10:51:45.678: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-6tx8g PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 10:51:45.678: INFO: >>> kubeConfig: /root/.kube/config
I0214 10:51:45.787819       8 log.go:172] (0xc001a94580) (0xc001212c80) Create stream
I0214 10:51:45.787990       8 log.go:172] (0xc001a94580) (0xc001212c80) Stream added, broadcasting: 1
I0214 10:51:45.797758       8 log.go:172] (0xc001a94580) Reply frame received for 1
I0214 10:51:45.797844       8 log.go:172] (0xc001a94580) (0xc000b768c0) Create stream
I0214 10:51:45.797869       8 log.go:172] (0xc001a94580) (0xc000b768c0) Stream added, broadcasting: 3
I0214 10:51:45.799807       8 log.go:172] (0xc001a94580) Reply frame received for 3
I0214 10:51:45.799916       8 log.go:172] (0xc001a94580) (0xc001212d20) Create stream
I0214 10:51:45.799936       8 log.go:172] (0xc001a94580) (0xc001212d20) Stream added, broadcasting: 5
I0214 10:51:45.801405       8 log.go:172] (0xc001a94580) Reply frame received for 5
I0214 10:51:45.998680       8 log.go:172] (0xc001a94580) Data frame received for 3
I0214 10:51:45.998826       8 log.go:172] (0xc000b768c0) (3) Data frame handling
I0214 10:51:45.998895       8 log.go:172] (0xc000b768c0) (3) Data frame sent
I0214 10:51:46.157342       8 log.go:172] (0xc001a94580) Data frame received for 1
I0214 10:51:46.157546       8 log.go:172] (0xc001212c80) (1) Data frame handling
I0214 10:51:46.157628       8 log.go:172] (0xc001212c80) (1) Data frame sent
I0214 10:51:46.158180       8 log.go:172] (0xc001a94580) (0xc000b768c0) Stream removed, broadcasting: 3
I0214 10:51:46.158501       8 log.go:172] (0xc001a94580) (0xc001212c80) Stream removed, broadcasting: 1
I0214 10:51:46.159126       8 log.go:172] (0xc001a94580) (0xc001212d20) Stream removed, broadcasting: 5
I0214 10:51:46.159238       8 log.go:172] (0xc001a94580) (0xc001212c80) Stream removed, broadcasting: 1
I0214 10:51:46.159249       8 log.go:172] (0xc001a94580) (0xc000b768c0) Stream removed, broadcasting: 3
I0214 10:51:46.159259       8 log.go:172] (0xc001a94580) (0xc001212d20) Stream removed, broadcasting: 5
Feb 14 10:51:46.159: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:51:46.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0214 10:51:46.163019       8 log.go:172] (0xc001a94580) Go away received
STEP: Destroying namespace "e2e-tests-pod-network-test-6tx8g" for this suite.
Feb 14 10:52:10.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:52:10.325: INFO: namespace: e2e-tests-pod-network-test-6tx8g, resource: bindings, ignored listing per whitelist
Feb 14 10:52:10.545: INFO: namespace e2e-tests-pod-network-test-6tx8g deletion completed in 24.36058102s

• [SLOW TEST:57.338 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
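Note: both Granular Checks in this run boil down to one exec into the "hostexec" helper pod; the pod IP, ports and namespaces below are the ones from this run, and the test additionally pipes the output through grep -v '^\s*$' to drop blank lines:

# HTTP variant (this test): ask the netserver pod for its hostname.
kubectl exec -n e2e-tests-pod-network-test-6tx8g -c hostexec host-test-container-pod -- \
  curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName

# UDP variant (the earlier Networking test): same question over netcat.
kubectl exec -n e2e-tests-pod-network-test-sxfrw -c hostexec host-test-container-pod -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.32.0.4 8081"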
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:52:10.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-0d3b2344-4f18-11ea-af88-0242ac110007
STEP: Creating secret with name s-test-opt-upd-0d3b2407-4f18-11ea-af88-0242ac110007
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0d3b2344-4f18-11ea-af88-0242ac110007
STEP: Updating secret s-test-opt-upd-0d3b2407-4f18-11ea-af88-0242ac110007
STEP: Creating secret with name s-test-opt-create-0d3b243e-4f18-11ea-af88-0242ac110007
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:52:29.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fxkr7" for this suite.
Feb 14 10:52:53.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:52:53.395: INFO: namespace: e2e-tests-projected-fxkr7, resource: bindings, ignored listing per whitelist
Feb 14 10:52:53.402: INFO: namespace e2e-tests-projected-fxkr7 deletion completed in 24.216402285s

• [SLOW TEST:42.856 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
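Note: the generated secret names above come from the framework; the mechanics being exercised are optional secret sources in a projected volume, which can be reproduced roughly as follows (all names illustrative):

kubectl create secret generic s-test-opt-del --from-literal=data-1=value-1
kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do ls /etc/projected; sleep 5; done"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret: {name: s-test-opt-del, optional: true}     # may disappear later
      - secret: {name: s-test-opt-upd, optional: true}     # updated in place by the test
      - secret: {name: s-test-opt-create, optional: true}  # does not exist yet
EOF

# Mirror the STEPs above; the kubelet refreshes the mounted directory to match.
kubectl delete secret s-test-opt-del
kubectl create secret generic s-test-opt-create --from-literal=data-1=value-1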
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:52:53.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-26baa6b9-4f18-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 10:52:53.636: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-26bdf113-4f18-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-t22n2" to be "success or failure"
Feb 14 10:52:53.706: INFO: Pod "pod-projected-configmaps-26bdf113-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 69.491495ms
Feb 14 10:52:55.893: INFO: Pod "pod-projected-configmaps-26bdf113-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256885817s
Feb 14 10:52:57.910: INFO: Pod "pod-projected-configmaps-26bdf113-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273842685s
Feb 14 10:53:00.177: INFO: Pod "pod-projected-configmaps-26bdf113-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.540516189s
Feb 14 10:53:02.212: INFO: Pod "pod-projected-configmaps-26bdf113-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.576059769s
Feb 14 10:53:04.231: INFO: Pod "pod-projected-configmaps-26bdf113-4f18-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.594924326s
STEP: Saw pod success
Feb 14 10:53:04.231: INFO: Pod "pod-projected-configmaps-26bdf113-4f18-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 10:53:04.236: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-26bdf113-4f18-11ea-af88-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 10:53:05.013: INFO: Waiting for pod pod-projected-configmaps-26bdf113-4f18-11ea-af88-0242ac110007 to disappear
Feb 14 10:53:05.027: INFO: Pod pod-projected-configmaps-26bdf113-4f18-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:53:05.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t22n2" for this suite.
Feb 14 10:53:11.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:53:11.168: INFO: namespace: e2e-tests-projected-t22n2, resource: bindings, ignored listing per whitelist
Feb 14 10:53:11.219: INFO: namespace e2e-tests-projected-t22n2 deletion completed in 6.185603694s

• [SLOW TEST:17.817 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
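Note: "with mappings" refers to the items list that renames a ConfigMap key to an arbitrary path inside the projected volume; a minimal stand-alone version (key, path and names are made up):

kubectl create configmap projected-configmap-map --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-map
          items:
          - key: data-1              # key in the ConfigMap
            path: path/to/data-2     # file it appears as inside the mount
EOF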
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:53:11.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 10:53:11.517: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Feb 14 10:53:11.525: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6nvvs/daemonsets","resourceVersion":"21631623"},"items":null}

Feb 14 10:53:11.528: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6nvvs/pods","resourceVersion":"21631623"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:53:11.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-6nvvs" for this suite.
Feb 14 10:53:17.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:53:17.665: INFO: namespace: e2e-tests-daemonsets-6nvvs, resource: bindings, ignored listing per whitelist
Feb 14 10:53:17.744: INFO: namespace e2e-tests-daemonsets-6nvvs deletion completed in 6.204990358s

S [SKIPPING] [6.525 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Feb 14 10:53:11.517: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:53:17.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-354bc253-4f18-11ea-af88-0242ac110007
STEP: Creating configMap with name cm-test-opt-upd-354bc2fa-4f18-11ea-af88-0242ac110007
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-354bc253-4f18-11ea-af88-0242ac110007
STEP: Updating configmap cm-test-opt-upd-354bc2fa-4f18-11ea-af88-0242ac110007
STEP: Creating configMap with name cm-test-opt-create-354bc33e-4f18-11ea-af88-0242ac110007
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:55:00.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-f4brl" for this suite.
Feb 14 10:55:26.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:55:26.999: INFO: namespace: e2e-tests-configmap-f4brl, resource: bindings, ignored listing per whitelist
Feb 14 10:55:27.041: INFO: namespace e2e-tests-configmap-f4brl deletion completed in 26.247174002s

• [SLOW TEST:129.297 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
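Note: this is the ConfigMap counterpart of the projected-secret sketch earlier; only the volume source changes, with optional: true letting the pod start and keep running while the referenced ConfigMaps come and go:

  volumes:
  - name: configmap-volumes
    configMap:
      name: cm-test-opt-del        # illustrative; the run used generated names
      optional: true               # a missing or deleted ConfigMap is tolerated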
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:55:27.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Feb 14 10:55:27.372: INFO: Waiting up to 5m0s for pod "var-expansion-8262fa75-4f18-11ea-af88-0242ac110007" in namespace "e2e-tests-var-expansion-dfcz7" to be "success or failure"
Feb 14 10:55:27.389: INFO: Pod "var-expansion-8262fa75-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 16.860592ms
Feb 14 10:55:29.423: INFO: Pod "var-expansion-8262fa75-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050506213s
Feb 14 10:55:31.435: INFO: Pod "var-expansion-8262fa75-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06238275s
Feb 14 10:55:33.527: INFO: Pod "var-expansion-8262fa75-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154128116s
Feb 14 10:55:35.535: INFO: Pod "var-expansion-8262fa75-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162328475s
Feb 14 10:55:37.554: INFO: Pod "var-expansion-8262fa75-4f18-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.181115885s
STEP: Saw pod success
Feb 14 10:55:37.554: INFO: Pod "var-expansion-8262fa75-4f18-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 10:55:37.564: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-8262fa75-4f18-11ea-af88-0242ac110007 container dapi-container: 
STEP: delete the pod
Feb 14 10:55:38.726: INFO: Waiting for pod var-expansion-8262fa75-4f18-11ea-af88-0242ac110007 to disappear
Feb 14 10:55:38.890: INFO: Pod var-expansion-8262fa75-4f18-11ea-af88-0242ac110007 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:55:38.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-dfcz7" for this suite.
Feb 14 10:55:44.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:55:45.032: INFO: namespace: e2e-tests-var-expansion-dfcz7, resource: bindings, ignored listing per whitelist
Feb 14 10:55:45.174: INFO: namespace e2e-tests-var-expansion-dfcz7 deletion completed in 6.270384643s

• [SLOW TEST:18.132 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
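Note: "composing env vars" means referencing earlier entries of the env list with $(VAR) syntax, which the kubelet expands before starting the container; a small self-contained version with made-up values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"      # composed from the two variables above
EOF

kubectl logs var-expansion | grep FOOBAR   # once the pod has run: FOOBAR=foo-value;;bar-value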
------------------------------
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:55:45.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-9w8d8/configmap-test-8d1e4496-4f18-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 10:55:45.378: INFO: Waiting up to 5m0s for pod "pod-configmaps-8d1fb4be-4f18-11ea-af88-0242ac110007" in namespace "e2e-tests-configmap-9w8d8" to be "success or failure"
Feb 14 10:55:45.388: INFO: Pod "pod-configmaps-8d1fb4be-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.279689ms
Feb 14 10:55:47.406: INFO: Pod "pod-configmaps-8d1fb4be-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027883682s
Feb 14 10:55:49.418: INFO: Pod "pod-configmaps-8d1fb4be-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039908001s
Feb 14 10:55:51.459: INFO: Pod "pod-configmaps-8d1fb4be-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080897813s
Feb 14 10:55:53.806: INFO: Pod "pod-configmaps-8d1fb4be-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.427704145s
Feb 14 10:55:55.825: INFO: Pod "pod-configmaps-8d1fb4be-4f18-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.446476356s
STEP: Saw pod success
Feb 14 10:55:55.825: INFO: Pod "pod-configmaps-8d1fb4be-4f18-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 10:55:55.833: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8d1fb4be-4f18-11ea-af88-0242ac110007 container env-test: 
STEP: delete the pod
Feb 14 10:55:56.027: INFO: Waiting for pod pod-configmaps-8d1fb4be-4f18-11ea-af88-0242ac110007 to disappear
Feb 14 10:55:56.051: INFO: Pod pod-configmaps-8d1fb4be-4f18-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:55:56.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9w8d8" for this suite.
Feb 14 10:56:02.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:56:02.276: INFO: namespace: e2e-tests-configmap-9w8d8, resource: bindings, ignored listing per whitelist
Feb 14 10:56:02.314: INFO: namespace e2e-tests-configmap-9w8d8 deletion completed in 6.256989442s

• [SLOW TEST:17.141 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:56:02.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 14 10:56:02.601: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 14 10:56:02.640: INFO: Waiting for terminating namespaces to be deleted...
Feb 14 10:56:02.655: INFO: 
Logging pods the kubelet thinks is on node hunter-server-hu5at5svl7ps before test
Feb 14 10:56:02.680: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 14 10:56:02.681: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 14 10:56:02.681: INFO: 	Container coredns ready: true, restart count 0
Feb 14 10:56:02.681: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 14 10:56:02.681: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 10:56:02.681: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 14 10:56:02.681: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 14 10:56:02.681: INFO: 	Container weave ready: true, restart count 0
Feb 14 10:56:02.681: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 10:56:02.681: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 14 10:56:02.681: INFO: 	Container coredns ready: true, restart count 0
Feb 14 10:56:02.681: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 14 10:56:02.681: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Feb 14 10:56:02.795: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 14 10:56:02.795: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 14 10:56:02.795: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 14 10:56:02.795: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Feb 14 10:56:02.795: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Feb 14 10:56:02.795: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Feb 14 10:56:02.795: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Feb 14 10:56:02.795: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-9782b318-4f18-11ea-af88-0242ac110007.15f33f8b92740cad], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-zpfzs/filler-pod-9782b318-4f18-11ea-af88-0242ac110007 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-9782b318-4f18-11ea-af88-0242ac110007.15f33f8cab511611], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-9782b318-4f18-11ea-af88-0242ac110007.15f33f8d3dc5a367], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-9782b318-4f18-11ea-af88-0242ac110007.15f33f8d779f2e5b], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f33f8dec26908e], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
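The FailedScheduling event above is the expected outcome: the scheduler sums the CPU requests already on the node (logged above), the filler pod claims most of what remains, and a further request cannot fit. A minimal way to observe the same predicate outside the test, assuming a hypothetical pod name (cpu-overcommit-demo) and a deliberately oversized request:

kubectl describe node hunter-server-hu5at5svl7ps | grep -A 6 "Allocated resources"
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-overcommit-demo
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "64"            # assumption: far more than the node's allocatable CPU
EOF
kubectl describe pod cpu-overcommit-demo   # Events should show FailedScheduling / Insufficient cpu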
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:56:14.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-zpfzs" for this suite.
Feb 14 10:56:22.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:56:22.417: INFO: namespace: e2e-tests-sched-pred-zpfzs, resource: bindings, ignored listing per whitelist
Feb 14 10:56:22.423: INFO: namespace e2e-tests-sched-pred-zpfzs deletion completed in 8.269265502s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:20.109 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:56:22.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-a3fbfd02-4f18-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 10:56:23.877: INFO: Waiting up to 5m0s for pod "pod-secrets-a410be3c-4f18-11ea-af88-0242ac110007" in namespace "e2e-tests-secrets-pqkdd" to be "success or failure"
Feb 14 10:56:23.884: INFO: Pod "pod-secrets-a410be3c-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.330472ms
Feb 14 10:56:25.944: INFO: Pod "pod-secrets-a410be3c-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067192953s
Feb 14 10:56:27.969: INFO: Pod "pod-secrets-a410be3c-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092225594s
Feb 14 10:56:30.214: INFO: Pod "pod-secrets-a410be3c-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.336302483s
Feb 14 10:56:32.425: INFO: Pod "pod-secrets-a410be3c-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547649717s
Feb 14 10:56:34.446: INFO: Pod "pod-secrets-a410be3c-4f18-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.569000992s
STEP: Saw pod success
Feb 14 10:56:34.446: INFO: Pod "pod-secrets-a410be3c-4f18-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 10:56:34.476: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-a410be3c-4f18-11ea-af88-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Feb 14 10:56:34.659: INFO: Waiting for pod pod-secrets-a410be3c-4f18-11ea-af88-0242ac110007 to disappear
Feb 14 10:56:34.675: INFO: Pod pod-secrets-a410be3c-4f18-11ea-af88-0242ac110007 no longer exists
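The "mappings" in this test refer to the items field of a secret volume, which renames a secret key to a chosen file path inside the mount. A minimal sketch with hypothetical names (example-secret, secret-mapping-demo):

kubectl create secret generic example-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret
      items:
      - key: data-1
        path: new-path-data-1      # key data-1 is exposed under this file name
EOF
kubectl logs secret-mapping-demo   # should print value-1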
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:56:34.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-pqkdd" for this suite.
Feb 14 10:56:42.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:56:42.921: INFO: namespace: e2e-tests-secrets-pqkdd, resource: bindings, ignored listing per whitelist
Feb 14 10:56:43.020: INFO: namespace e2e-tests-secrets-pqkdd deletion completed in 8.323305203s

• [SLOW TEST:20.596 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:56:43.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-af8f9866-4f18-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 10:56:43.252: INFO: Waiting up to 5m0s for pod "pod-secrets-af9c76e3-4f18-11ea-af88-0242ac110007" in namespace "e2e-tests-secrets-v2m4r" to be "success or failure"
Feb 14 10:56:43.262: INFO: Pod "pod-secrets-af9c76e3-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.573118ms
Feb 14 10:56:45.279: INFO: Pod "pod-secrets-af9c76e3-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027018208s
Feb 14 10:56:47.304: INFO: Pod "pod-secrets-af9c76e3-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051911067s
Feb 14 10:56:49.584: INFO: Pod "pod-secrets-af9c76e3-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.332430748s
Feb 14 10:56:51.605: INFO: Pod "pod-secrets-af9c76e3-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.352697851s
Feb 14 10:56:53.628: INFO: Pod "pod-secrets-af9c76e3-4f18-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.376660992s
STEP: Saw pod success
Feb 14 10:56:53.629: INFO: Pod "pod-secrets-af9c76e3-4f18-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 10:56:53.660: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-af9c76e3-4f18-11ea-af88-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Feb 14 10:56:53.883: INFO: Waiting for pod pod-secrets-af9c76e3-4f18-11ea-af88-0242ac110007 to disappear
Feb 14 10:56:53.899: INFO: Pod pod-secrets-af9c76e3-4f18-11ea-af88-0242ac110007 no longer exists
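The only difference from the previous secret test is that each mapped item also carries a file mode. A minimal sketch reusing the hypothetical example-secret above and checking the resulting permissions (stat -L follows the symlink the kubelet creates for projected keys):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-item-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -L -c '%a' /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400               # per-item file mode, the "Item Mode" this test verifies
EOF
kubectl logs secret-item-mode-demo   # should print 400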
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:56:53.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-v2m4r" for this suite.
Feb 14 10:56:59.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:57:00.133: INFO: namespace: e2e-tests-secrets-v2m4r, resource: bindings, ignored listing per whitelist
Feb 14 10:57:00.150: INFO: namespace e2e-tests-secrets-v2m4r deletion completed in 6.237922559s

• [SLOW TEST:17.128 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:57:00.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-b9cea18f-4f18-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 10:57:00.353: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b9cf7ee4-4f18-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-5hmw8" to be "success or failure"
Feb 14 10:57:00.435: INFO: Pod "pod-projected-configmaps-b9cf7ee4-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 82.360795ms
Feb 14 10:57:02.956: INFO: Pod "pod-projected-configmaps-b9cf7ee4-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.60311333s
Feb 14 10:57:04.976: INFO: Pod "pod-projected-configmaps-b9cf7ee4-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.622956555s
Feb 14 10:57:07.028: INFO: Pod "pod-projected-configmaps-b9cf7ee4-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.675341734s
Feb 14 10:57:09.044: INFO: Pod "pod-projected-configmaps-b9cf7ee4-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.691071707s
Feb 14 10:57:11.233: INFO: Pod "pod-projected-configmaps-b9cf7ee4-4f18-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.880409752s
STEP: Saw pod success
Feb 14 10:57:11.233: INFO: Pod "pod-projected-configmaps-b9cf7ee4-4f18-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 10:57:11.245: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-b9cf7ee4-4f18-11ea-af88-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 10:57:11.582: INFO: Waiting for pod pod-projected-configmaps-b9cf7ee4-4f18-11ea-af88-0242ac110007 to disappear
Feb 14 10:57:11.614: INFO: Pod pod-projected-configmaps-b9cf7ee4-4f18-11ea-af88-0242ac110007 no longer exists
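"As non-root" here means the pod runs with a non-zero UID while reading a ConfigMap exposed through a projected volume. A minimal sketch with hypothetical names and an arbitrary non-root UID:

kubectl create configmap projected-demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # assumption: any non-root UID works here
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/data-1"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/projected
  volumes:
  - name: config-volume
    projected:
      sources:
      - configMap:
          name: projected-demo-config
EOF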
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:57:11.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5hmw8" for this suite.
Feb 14 10:57:17.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:57:17.987: INFO: namespace: e2e-tests-projected-5hmw8, resource: bindings, ignored listing per whitelist
Feb 14 10:57:18.878: INFO: namespace e2e-tests-projected-5hmw8 deletion completed in 7.197937079s

• [SLOW TEST:18.728 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:57:18.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 10:57:19.083: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4f2b2e7-4f18-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-h54zp" to be "success or failure"
Feb 14 10:57:19.089: INFO: Pod "downwardapi-volume-c4f2b2e7-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.602816ms
Feb 14 10:57:21.279: INFO: Pod "downwardapi-volume-c4f2b2e7-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196090522s
Feb 14 10:57:23.297: INFO: Pod "downwardapi-volume-c4f2b2e7-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213977391s
Feb 14 10:57:25.403: INFO: Pod "downwardapi-volume-c4f2b2e7-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.320680561s
Feb 14 10:57:27.417: INFO: Pod "downwardapi-volume-c4f2b2e7-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.334560083s
Feb 14 10:57:29.974: INFO: Pod "downwardapi-volume-c4f2b2e7-4f18-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.89094458s
STEP: Saw pod success
Feb 14 10:57:29.974: INFO: Pod "downwardapi-volume-c4f2b2e7-4f18-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 10:57:29.986: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c4f2b2e7-4f18-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 10:57:30.211: INFO: Waiting for pod downwardapi-volume-c4f2b2e7-4f18-11ea-af88-0242ac110007 to disappear
Feb 14 10:57:30.230: INFO: Pod downwardapi-volume-c4f2b2e7-4f18-11ea-af88-0242ac110007 no longer exists
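The downward API volume used here exposes the container's own CPU request as a file via resourceFieldRef. A minimal sketch with hypothetical names; a divisor of 1m makes a 250m request appear as 250:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
EOF
kubectl logs downwardapi-cpu-request-demo   # should print 250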
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:57:30.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h54zp" for this suite.
Feb 14 10:57:36.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:57:36.575: INFO: namespace: e2e-tests-projected-h54zp, resource: bindings, ignored listing per whitelist
Feb 14 10:57:36.668: INFO: namespace e2e-tests-projected-h54zp deletion completed in 6.42568372s

• [SLOW TEST:17.790 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:57:36.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pc4z4 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-pc4z4;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pc4z4 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-pc4z4;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pc4z4.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-pc4z4.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pc4z4.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-pc4z4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-pc4z4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 39.127.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.127.39_udp@PTR;check="$$(dig +tcp +noall +answer +search 39.127.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.127.39_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pc4z4 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-pc4z4;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pc4z4 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-pc4z4;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-pc4z4.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-pc4z4.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-pc4z4.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-pc4z4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-pc4z4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 39.127.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.127.39_udp@PTR;check="$$(dig +tcp +noall +answer +search 39.127.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.127.39_tcp@PTR;sleep 1; done
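The two long command lines above are the probe scripts the test injects into its "wheezy" and "jessie" probe pods; each dig writes an OK marker per record type. A single record can be spot-checked from an ad-hoc pod while the service still exists (dns-check is a hypothetical pod name; the service and namespace below are the test's and are torn down when it finishes):

kubectl run dns-check --generator=run-pod/v1 --image=busybox --restart=Never \
  --command -- nslookup dns-test-service.e2e-tests-dns-pc4z4.svc.cluster.local
kubectl logs dns-check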

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 10:57:51.186: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.192: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.203: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-pc4z4 from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.209: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-pc4z4 from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.215: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-pc4z4.svc from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.220: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-pc4z4.svc from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.225: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.232: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.242: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.251: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.270: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.275: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.280: INFO: Unable to read 10.106.127.39_udp@PTR from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.285: INFO: Unable to read 10.106.127.39_tcp@PTR from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.291: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.295: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.300: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pc4z4 from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.304: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc4z4 from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.307: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-pc4z4.svc from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.311: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-pc4z4.svc from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.315: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.319: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.322: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.326: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.370: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.374: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.377: INFO: Unable to read 10.106.127.39_udp@PTR from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.381: INFO: Unable to read 10.106.127.39_tcp@PTR from pod e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007)
Feb 14 10:57:51.381: INFO: Lookups using e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-pc4z4 wheezy_tcp@dns-test-service.e2e-tests-dns-pc4z4 wheezy_udp@dns-test-service.e2e-tests-dns-pc4z4.svc wheezy_tcp@dns-test-service.e2e-tests-dns-pc4z4.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.106.127.39_udp@PTR 10.106.127.39_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-pc4z4 jessie_tcp@dns-test-service.e2e-tests-dns-pc4z4 jessie_udp@dns-test-service.e2e-tests-dns-pc4z4.svc jessie_tcp@dns-test-service.e2e-tests-dns-pc4z4.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-pc4z4.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-pc4z4.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.106.127.39_udp@PTR 10.106.127.39_tcp@PTR]

Feb 14 10:57:56.925: INFO: DNS probes using e2e-tests-dns-pc4z4/dns-test-cfb16fc7-4f18-11ea-af88-0242ac110007 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:57:57.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-pc4z4" for this suite.
Feb 14 10:58:03.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:58:03.721: INFO: namespace: e2e-tests-dns-pc4z4, resource: bindings, ignored listing per whitelist
Feb 14 10:58:03.809: INFO: namespace e2e-tests-dns-pc4z4 deletion completed in 6.293381769s

• [SLOW TEST:27.141 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:58:03.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 10:58:04.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-9stpt'
Feb 14 10:58:06.447: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 14 10:58:06.448: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
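The stderr above notes that --generator=deployment/v1beta1 is deprecated. A roughly equivalent, non-deprecated way to create the same workload (same image and namespace as the test, which cleans them up afterwards):

kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-9stpt
# or, for a single pod instead of a Deployment:
kubectl run e2e-test-nginx-pod --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-9stpt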
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Feb 14 10:58:10.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-9stpt'
Feb 14 10:58:10.724: INFO: stderr: ""
Feb 14 10:58:10.724: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:58:10.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-9stpt" for this suite.
Feb 14 10:58:16.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:58:16.822: INFO: namespace: e2e-tests-kubectl-9stpt, resource: bindings, ignored listing per whitelist
Feb 14 10:58:16.909: INFO: namespace e2e-tests-kubectl-9stpt deletion completed in 6.17877952s

• [SLOW TEST:13.100 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:58:16.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 14 10:58:17.123: INFO: Waiting up to 5m0s for pod "pod-e79054ba-4f18-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-5lkrr" to be "success or failure"
Feb 14 10:58:17.227: INFO: Pod "pod-e79054ba-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 103.741155ms
Feb 14 10:58:19.242: INFO: Pod "pod-e79054ba-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119188457s
Feb 14 10:58:21.255: INFO: Pod "pod-e79054ba-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131800275s
Feb 14 10:58:23.272: INFO: Pod "pod-e79054ba-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.149238557s
Feb 14 10:58:25.300: INFO: Pod "pod-e79054ba-4f18-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177391297s
Feb 14 10:58:27.311: INFO: Pod "pod-e79054ba-4f18-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.187955323s
STEP: Saw pod success
Feb 14 10:58:27.311: INFO: Pod "pod-e79054ba-4f18-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 10:58:27.314: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-e79054ba-4f18-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 10:58:28.084: INFO: Waiting for pod pod-e79054ba-4f18-11ea-af88-0242ac110007 to disappear
Feb 14 10:58:28.091: INFO: Pod pod-e79054ba-4f18-11ea-af88-0242ac110007 no longer exists
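"tmpfs" here comes from emptyDir's medium: Memory; the test asserts the mount type and the default 0777 mode of the volume root. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume && stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # backs the volume with tmpfs
EOF
kubectl logs emptydir-tmpfs-demo   # should show a tmpfs mount and mode 777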
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:58:28.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-5lkrr" for this suite.
Feb 14 10:58:34.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:58:35.310: INFO: namespace: e2e-tests-emptydir-5lkrr, resource: bindings, ignored listing per whitelist
Feb 14 10:58:35.328: INFO: namespace e2e-tests-emptydir-5lkrr deletion completed in 7.170816485s

• [SLOW TEST:18.418 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:58:35.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-4l2vv
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-4l2vv
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-4l2vv
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-4l2vv
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-4l2vv
Feb 14 10:58:47.956: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-4l2vv, name: ss-0, uid: f908152b-4f18-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete.
Feb 14 10:58:52.501: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-4l2vv, name: ss-0, uid: f908152b-4f18-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 14 10:58:52.565: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-4l2vv, name: ss-0, uid: f908152b-4f18-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete.
Feb 14 10:58:52.586: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-4l2vv
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-4l2vv
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-4l2vv and is in the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 14 10:59:05.232: INFO: Deleting all statefulset in ns e2e-tests-statefulset-4l2vv
Feb 14 10:59:05.246: INFO: Scaling statefulset ss to 0
Feb 14 10:59:25.307: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 10:59:25.318: INFO: Deleting statefulset ss
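The cleanup logged above (scale the StatefulSet to 0, wait for status.replicas, then delete it) can be done by hand with kubectl; the namespace is the test's and is destroyed a few lines later, so this is illustrative only:

kubectl -n e2e-tests-statefulset-4l2vv scale statefulset ss --replicas=0
kubectl -n e2e-tests-statefulset-4l2vv get statefulset ss -o jsonpath='{.status.replicas}'
kubectl -n e2e-tests-statefulset-4l2vv delete statefulset ss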
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 10:59:25.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-4l2vv" for this suite.
Feb 14 10:59:31.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 10:59:31.629: INFO: namespace: e2e-tests-statefulset-4l2vv, resource: bindings, ignored listing per whitelist
Feb 14 10:59:31.724: INFO: namespace e2e-tests-statefulset-4l2vv deletion completed in 6.216332198s

• [SLOW TEST:56.395 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 10:59:31.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 14 10:59:32.273: INFO: Number of nodes with available pods: 0
Feb 14 10:59:32.273: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:33.331: INFO: Number of nodes with available pods: 0
Feb 14 10:59:33.331: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:34.333: INFO: Number of nodes with available pods: 0
Feb 14 10:59:34.333: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:35.285: INFO: Number of nodes with available pods: 0
Feb 14 10:59:35.285: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:36.303: INFO: Number of nodes with available pods: 0
Feb 14 10:59:36.303: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:37.433: INFO: Number of nodes with available pods: 0
Feb 14 10:59:37.433: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:38.295: INFO: Number of nodes with available pods: 0
Feb 14 10:59:38.295: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:39.294: INFO: Number of nodes with available pods: 0
Feb 14 10:59:39.294: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:40.301: INFO: Number of nodes with available pods: 0
Feb 14 10:59:40.301: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:41.298: INFO: Number of nodes with available pods: 1
Feb 14 10:59:41.298: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 14 10:59:41.357: INFO: Number of nodes with available pods: 0
Feb 14 10:59:41.358: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:42.392: INFO: Number of nodes with available pods: 0
Feb 14 10:59:42.393: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:43.378: INFO: Number of nodes with available pods: 0
Feb 14 10:59:43.378: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:44.382: INFO: Number of nodes with available pods: 0
Feb 14 10:59:44.382: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:45.382: INFO: Number of nodes with available pods: 0
Feb 14 10:59:45.382: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:46.410: INFO: Number of nodes with available pods: 0
Feb 14 10:59:46.410: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:47.380: INFO: Number of nodes with available pods: 0
Feb 14 10:59:47.380: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:48.684: INFO: Number of nodes with available pods: 0
Feb 14 10:59:48.685: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:49.434: INFO: Number of nodes with available pods: 0
Feb 14 10:59:49.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:50.388: INFO: Number of nodes with available pods: 0
Feb 14 10:59:50.388: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:51.381: INFO: Number of nodes with available pods: 0
Feb 14 10:59:51.381: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:52.429: INFO: Number of nodes with available pods: 0
Feb 14 10:59:52.430: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:53.384: INFO: Number of nodes with available pods: 0
Feb 14 10:59:53.384: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:54.391: INFO: Number of nodes with available pods: 0
Feb 14 10:59:54.391: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 10:59:55.390: INFO: Number of nodes with available pods: 1
Feb 14 10:59:55.390: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-xcrdw, will wait for the garbage collector to delete the pods
Feb 14 10:59:55.480: INFO: Deleting DaemonSet.extensions daemon-set took: 28.485224ms
Feb 14 10:59:55.681: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.633592ms
Feb 14 11:00:12.702: INFO: Number of nodes with available pods: 0
Feb 14 11:00:12.703: INFO: Number of running nodes: 0, number of available pods: 0
Feb 14 11:00:12.786: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-xcrdw/daemonsets","resourceVersion":"21632630"},"items":null}

Feb 14 11:00:12.811: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-xcrdw/pods","resourceVersion":"21632630"},"items":null}
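The empty DaemonSetList/PodList dumps above confirm the garbage collector removed both the DaemonSet and its pods. For reference, the equivalent manual inspection and removal (run before the test deletes the DaemonSet and while its namespace still exists) would be:

kubectl -n e2e-tests-daemonsets-xcrdw get daemonset daemon-set
kubectl -n e2e-tests-daemonsets-xcrdw get pods -o wide      # one daemon pod per schedulable node
kubectl -n e2e-tests-daemonsets-xcrdw delete daemonset daemon-set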

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:00:12.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-xcrdw" for this suite.
Feb 14 11:00:18.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:00:18.921: INFO: namespace: e2e-tests-daemonsets-xcrdw, resource: bindings, ignored listing per whitelist
Feb 14 11:00:19.124: INFO: namespace e2e-tests-daemonsets-xcrdw deletion completed in 6.292591422s

• [SLOW TEST:47.400 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:00:19.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 14 11:00:31.948: INFO: Successfully updated pod "labelsupdate306a3bd9-4f19-11ea-af88-0242ac110007"
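This test mounts the pod's own labels through a projected downward API volume and then edits them; the kubelet rewrites the mounted file on a later sync. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-update-demo
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
kubectl label pod labels-update-demo key2=value2 --overwrite
kubectl logs labels-update-demo --tail=5   # the labels file eventually shows key2="value2"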
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:00:34.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h6s8k" for this suite.
Feb 14 11:00:56.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:00:56.463: INFO: namespace: e2e-tests-projected-h6s8k, resource: bindings, ignored listing per whitelist
Feb 14 11:00:56.507: INFO: namespace e2e-tests-projected-h6s8k deletion completed in 22.443595122s

• [SLOW TEST:37.383 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:00:56.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 14 11:00:56.776: INFO: Waiting up to 5m0s for pod "pod-46b84ca6-4f19-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-fl6jd" to be "success or failure"
Feb 14 11:00:56.808: INFO: Pod "pod-46b84ca6-4f19-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 31.05511ms
Feb 14 11:00:59.000: INFO: Pod "pod-46b84ca6-4f19-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223539029s
Feb 14 11:01:01.025: INFO: Pod "pod-46b84ca6-4f19-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.248411451s
Feb 14 11:01:03.115: INFO: Pod "pod-46b84ca6-4f19-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.338292064s
Feb 14 11:01:05.123: INFO: Pod "pod-46b84ca6-4f19-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.346580686s
Feb 14 11:01:07.142: INFO: Pod "pod-46b84ca6-4f19-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.365347165s
STEP: Saw pod success
Feb 14 11:01:07.142: INFO: Pod "pod-46b84ca6-4f19-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:01:07.146: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-46b84ca6-4f19-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 11:01:07.289: INFO: Waiting for pod pod-46b84ca6-4f19-11ea-af88-0242ac110007 to disappear
Feb 14 11:01:07.374: INFO: Pod pod-46b84ca6-4f19-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:01:07.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fl6jd" for this suite.
Feb 14 11:01:13.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:01:13.661: INFO: namespace: e2e-tests-emptydir-fl6jd, resource: bindings, ignored listing per whitelist
Feb 14 11:01:13.759: INFO: namespace e2e-tests-emptydir-fl6jd deletion completed in 6.369927445s

• [SLOW TEST:17.252 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
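A rough manual equivalent of this check: a memory-backed emptyDir mounted into a non-root container that creates a 0666 file and confirms the mount is tmpfs. The image, user ID and paths are illustrative assumptions.

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0666-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001        # non-root; arbitrary UID for the sketch
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "touch /mnt/test/file && chmod 0666 /mnt/test/file && ls -l /mnt/test/file && mount | grep ' /mnt/test '"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/test
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory       # backs the volume with tmpfs
    EOF
    kubectl logs emptydir-0666-demo    # once Succeeded: shows -rw-rw-rw- and a tmpfs mount line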
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:01:13.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:01:26.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-fc9bq" for this suite.
Feb 14 11:01:32.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:01:32.646: INFO: namespace: e2e-tests-kubelet-test-fc9bq, resource: bindings, ignored listing per whitelist
Feb 14 11:01:32.688: INFO: namespace e2e-tests-kubelet-test-fc9bq deletion completed in 6.308656799s

• [SLOW TEST:18.928 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
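The same condition can be reproduced manually: run a container whose command always fails and read the terminated reason out of the pod status. The pod name and busybox image are illustrative.

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: always-fails-demo
    spec:
      restartPolicy: Never
      containers:
      - name: always-fails
        image: busybox
        command: ["/bin/false"]
    EOF
    # once the container has exited, the status carries a terminated state with a reason
    kubectl get pod always-fails-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # prints "Error"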
SS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:01:32.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 11:01:32.863: INFO: Creating ReplicaSet my-hostname-basic-5c3effaa-4f19-11ea-af88-0242ac110007
Feb 14 11:01:32.904: INFO: Pod name my-hostname-basic-5c3effaa-4f19-11ea-af88-0242ac110007: Found 0 pods out of 1
Feb 14 11:01:38.434: INFO: Pod name my-hostname-basic-5c3effaa-4f19-11ea-af88-0242ac110007: Found 1 pods out of 1
Feb 14 11:01:38.434: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5c3effaa-4f19-11ea-af88-0242ac110007" is running
Feb 14 11:01:42.469: INFO: Pod "my-hostname-basic-5c3effaa-4f19-11ea-af88-0242ac110007-9fgsk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 11:01:33 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 11:01:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5c3effaa-4f19-11ea-af88-0242ac110007]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 11:01:33 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5c3effaa-4f19-11ea-af88-0242ac110007]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 11:01:32 +0000 UTC Reason: Message:}])
Feb 14 11:01:42.470: INFO: Trying to dial the pod
Feb 14 11:01:47.547: INFO: Controller my-hostname-basic-5c3effaa-4f19-11ea-af88-0242ac110007: Got expected result from replica 1 [my-hostname-basic-5c3effaa-4f19-11ea-af88-0242ac110007-9fgsk]: "my-hostname-basic-5c3effaa-4f19-11ea-af88-0242ac110007-9fgsk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:01:47.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-9lcwh" for this suite.
Feb 14 11:01:53.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:01:53.916: INFO: namespace: e2e-tests-replicaset-9lcwh, resource: bindings, ignored listing per whitelist
Feb 14 11:01:53.919: INFO: namespace e2e-tests-replicaset-9lcwh deletion completed in 6.296948505s

• [SLOW TEST:21.230 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
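A hand-rolled version of this check: a ReplicaSet running an image that answers HTTP requests with its own hostname, then one request per replica to confirm each pod serves its name. The image reference below is illustrative; any hostname-echoing server works.

    kubectl create -f - <<'EOF'
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: hostname-demo
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: hostname-demo
      template:
        metadata:
          labels:
            app: hostname-demo
        spec:
          containers:
          - name: hostname
            image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # illustrative
            ports:
            - containerPort: 9376
    EOF
    kubectl get pods -l app=hostname-demo -o wide   # note each pod IP, then curl <pod-ip>:9376 from inside the cluster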
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:01:53.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0214 11:02:04.588847       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 11:02:04.588: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:02:04.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-nm5l5" for this suite.
Feb 14 11:02:11.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:02:11.215: INFO: namespace: e2e-tests-gc-nm5l5, resource: bindings, ignored listing per whitelist
Feb 14 11:02:11.346: INFO: namespace e2e-tests-gc-nm5l5 deletion completed in 6.745684087s

• [SLOW TEST:17.426 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
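The behaviour is easy to confirm with kubectl: delete a replication controller with the default (cascading) deletion and watch its pods disappear; passing --cascade=false would orphan them instead. Names and the placeholder image are illustrative.

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: gc-demo
    spec:
      replicas: 2
      selector:
        app: gc-demo
      template:
        metadata:
          labels:
            app: gc-demo
        spec:
          containers:
          - name: pause
            image: k8s.gcr.io/pause:3.1     # illustrative placeholder workload
    EOF
    kubectl delete rc gc-demo               # default deletion lets the garbage collector remove dependents
    kubectl get pods -l app=gc-demo         # empty once the collector has caught up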
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:02:11.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Feb 14 11:02:11.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:12.121: INFO: stderr: ""
Feb 14 11:02:12.122: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 11:02:12.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:12.485: INFO: stderr: ""
Feb 14 11:02:12.486: INFO: stdout: "update-demo-nautilus-5qwh4 update-demo-nautilus-cntkv "
Feb 14 11:02:12.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qwh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:12.800: INFO: stderr: ""
Feb 14 11:02:12.800: INFO: stdout: ""
Feb 14 11:02:12.800: INFO: update-demo-nautilus-5qwh4 is created but not running
Feb 14 11:02:17.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:17.974: INFO: stderr: ""
Feb 14 11:02:17.974: INFO: stdout: "update-demo-nautilus-5qwh4 update-demo-nautilus-cntkv "
Feb 14 11:02:17.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qwh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:18.264: INFO: stderr: ""
Feb 14 11:02:18.264: INFO: stdout: ""
Feb 14 11:02:18.264: INFO: update-demo-nautilus-5qwh4 is created but not running
Feb 14 11:02:23.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:23.641: INFO: stderr: ""
Feb 14 11:02:23.641: INFO: stdout: "update-demo-nautilus-5qwh4 update-demo-nautilus-cntkv "
Feb 14 11:02:23.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qwh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:23.866: INFO: stderr: ""
Feb 14 11:02:23.866: INFO: stdout: "true"
Feb 14 11:02:23.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qwh4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:24.034: INFO: stderr: ""
Feb 14 11:02:24.034: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 11:02:24.034: INFO: validating pod update-demo-nautilus-5qwh4
Feb 14 11:02:24.084: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 11:02:24.085: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 11:02:24.085: INFO: update-demo-nautilus-5qwh4 is verified up and running
Feb 14 11:02:24.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cntkv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:24.253: INFO: stderr: ""
Feb 14 11:02:24.253: INFO: stdout: ""
Feb 14 11:02:24.253: INFO: update-demo-nautilus-cntkv is created but not running
Feb 14 11:02:29.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:29.443: INFO: stderr: ""
Feb 14 11:02:29.444: INFO: stdout: "update-demo-nautilus-5qwh4 update-demo-nautilus-cntkv "
Feb 14 11:02:29.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qwh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:29.605: INFO: stderr: ""
Feb 14 11:02:29.606: INFO: stdout: "true"
Feb 14 11:02:29.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5qwh4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:29.713: INFO: stderr: ""
Feb 14 11:02:29.713: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 11:02:29.713: INFO: validating pod update-demo-nautilus-5qwh4
Feb 14 11:02:29.724: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 11:02:29.724: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 11:02:29.724: INFO: update-demo-nautilus-5qwh4 is verified up and running
Feb 14 11:02:29.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cntkv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:29.840: INFO: stderr: ""
Feb 14 11:02:29.840: INFO: stdout: "true"
Feb 14 11:02:29.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cntkv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:02:29.970: INFO: stderr: ""
Feb 14 11:02:29.971: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 11:02:29.971: INFO: validating pod update-demo-nautilus-cntkv
Feb 14 11:02:29.980: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 11:02:29.980: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 11:02:29.980: INFO: update-demo-nautilus-cntkv is verified up and running
STEP: rolling-update to new replication controller
Feb 14 11:02:29.983: INFO: scanned /root for discovery docs: 
Feb 14 11:02:29.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:03:05.265: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 14 11:03:05.266: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 11:03:05.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:03:05.497: INFO: stderr: ""
Feb 14 11:03:05.497: INFO: stdout: "update-demo-kitten-rtcgl update-demo-kitten-xdbsk "
Feb 14 11:03:05.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rtcgl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:03:05.609: INFO: stderr: ""
Feb 14 11:03:05.609: INFO: stdout: "true"
Feb 14 11:03:05.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-rtcgl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:03:05.735: INFO: stderr: ""
Feb 14 11:03:05.735: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 14 11:03:05.735: INFO: validating pod update-demo-kitten-rtcgl
Feb 14 11:03:05.749: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 14 11:03:05.749: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 14 11:03:05.749: INFO: update-demo-kitten-rtcgl is verified up and running
Feb 14 11:03:05.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xdbsk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:03:05.881: INFO: stderr: ""
Feb 14 11:03:05.881: INFO: stdout: "true"
Feb 14 11:03:05.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xdbsk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-f9kkn'
Feb 14 11:03:06.044: INFO: stderr: ""
Feb 14 11:03:06.044: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 14 11:03:06.044: INFO: validating pod update-demo-kitten-xdbsk
Feb 14 11:03:06.055: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 14 11:03:06.055: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb 14 11:03:06.055: INFO: update-demo-kitten-xdbsk is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:03:06.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-f9kkn" for this suite.
Feb 14 11:03:32.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:03:32.240: INFO: namespace: e2e-tests-kubectl-f9kkn, resource: bindings, ignored listing per whitelist
Feb 14 11:03:32.280: INFO: namespace e2e-tests-kubectl-f9kkn deletion completed in 26.220161535s

• [SLOW TEST:80.934 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
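The run above drives the (deprecated) rolling-update code path. The equivalent manual flow, with the namespace and manifest file names as placeholders, looks like:

    kubectl create -f nautilus-rc.yaml --namespace=<ns>          # initial name=update-demo controller
    kubectl rolling-update update-demo-nautilus --update-period=1s \
      -f kitten-rc.yaml --namespace=<ns>                         # replaces pods one by one, then renames the controller
    kubectl get pods -l name=update-demo --namespace=<ns> \
      -o template --template='{{range .items}}{{.metadata.name}} {{end}}'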
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:03:32.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 14 11:03:32.409: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:03:55.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-s9x4p" for this suite.
Feb 14 11:04:19.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:04:20.409: INFO: namespace: e2e-tests-init-container-s9x4p, resource: bindings, ignored listing per whitelist
Feb 14 11:04:20.452: INFO: namespace e2e-tests-init-container-s9x4p deletion completed in 24.615657618s

• [SLOW TEST:48.171 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
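A small manifest that exercises the same ordering guarantee: init containers must run to completion before the app container starts. Image and names are illustrative.

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Always
      initContainers:
      - name: init-step
        image: busybox
        command: ["sh", "-c", "echo init done"]
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
    EOF
    kubectl get pod init-demo \
      -o jsonpath='{.status.initContainerStatuses[0].state.terminated.reason}'   # "Completed" once the init step has run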
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:04:20.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Feb 14 11:04:31.147: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:05:04.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-8ztjz" for this suite.
Feb 14 11:05:10.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:05:10.675: INFO: namespace: e2e-tests-namespaces-8ztjz, resource: bindings, ignored listing per whitelist
Feb 14 11:05:10.823: INFO: namespace e2e-tests-namespaces-8ztjz deletion completed in 6.363972104s
STEP: Destroying namespace "e2e-tests-nsdeletetest-pfs9n" for this suite.
Feb 14 11:05:10.827: INFO: Namespace e2e-tests-nsdeletetest-pfs9n was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-9s6f7" for this suite.
Feb 14 11:05:16.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:05:17.075: INFO: namespace: e2e-tests-nsdeletetest-9s6f7, resource: bindings, ignored listing per whitelist
Feb 14 11:05:17.101: INFO: namespace e2e-tests-nsdeletetest-9s6f7 deletion completed in 6.274323488s

• [SLOW TEST:56.649 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
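Outside the suite, the same guarantee can be spot-checked in a few commands; names and the pause image are illustrative.

    kubectl create namespace nsdelete-demo
    kubectl run pause --generator=run-pod/v1 --image=k8s.gcr.io/pause:3.1 -n nsdelete-demo
    kubectl delete namespace nsdelete-demo      # deletion is asynchronous; the pod is torn down first
    kubectl get pods -n nsdelete-demo           # fails with NotFound once the namespace is fully removed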
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:05:17.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-e20b697d-4f19-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 11:05:17.368: INFO: Waiting up to 5m0s for pod "pod-secrets-e20cdc49-4f19-11ea-af88-0242ac110007" in namespace "e2e-tests-secrets-7z4cv" to be "success or failure"
Feb 14 11:05:17.494: INFO: Pod "pod-secrets-e20cdc49-4f19-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 125.871482ms
Feb 14 11:05:19.511: INFO: Pod "pod-secrets-e20cdc49-4f19-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142705684s
Feb 14 11:05:21.530: INFO: Pod "pod-secrets-e20cdc49-4f19-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161511595s
Feb 14 11:05:23.870: INFO: Pod "pod-secrets-e20cdc49-4f19-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.501864012s
Feb 14 11:05:25.897: INFO: Pod "pod-secrets-e20cdc49-4f19-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528635839s
Feb 14 11:05:27.913: INFO: Pod "pod-secrets-e20cdc49-4f19-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.545026222s
STEP: Saw pod success
Feb 14 11:05:27.913: INFO: Pod "pod-secrets-e20cdc49-4f19-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:05:27.918: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-e20cdc49-4f19-11ea-af88-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Feb 14 11:05:28.450: INFO: Waiting for pod pod-secrets-e20cdc49-4f19-11ea-af88-0242ac110007 to disappear
Feb 14 11:05:28.718: INFO: Pod pod-secrets-e20cdc49-4f19-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:05:28.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-7z4cv" for this suite.
Feb 14 11:05:34.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:05:34.947: INFO: namespace: e2e-tests-secrets-7z4cv, resource: bindings, ignored listing per whitelist
Feb 14 11:05:35.063: INFO: namespace e2e-tests-secrets-7z4cv deletion completed in 6.329107279s

• [SLOW TEST:17.962 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
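A minimal by-hand version of the check: create a secret, mount it as a volume, and read the key back from the file. Names, the key and the image are illustrative.

    kubectl create secret generic volume-secret-demo --from-literal=data-1=value-1
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["cat", "/etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: volume-secret-demo
    EOF
    kubectl logs secret-volume-demo    # prints "value-1" once the pod has Succeeded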
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:05:35.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Feb 14 11:05:35.926: INFO: created pod pod-service-account-defaultsa
Feb 14 11:05:35.926: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 14 11:05:35.940: INFO: created pod pod-service-account-mountsa
Feb 14 11:05:35.940: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 14 11:05:35.962: INFO: created pod pod-service-account-nomountsa
Feb 14 11:05:35.962: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 14 11:05:35.981: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 14 11:05:35.981: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 14 11:05:36.175: INFO: created pod pod-service-account-mountsa-mountspec
Feb 14 11:05:36.175: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 14 11:05:36.383: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 14 11:05:36.383: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 14 11:05:36.450: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 14 11:05:36.450: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 14 11:05:37.480: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 14 11:05:37.481: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 14 11:05:37.529: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 14 11:05:37.529: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:05:37.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-7f9b8" for this suite.
Feb 14 11:06:09.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:06:09.865: INFO: namespace: e2e-tests-svcaccounts-7f9b8, resource: bindings, ignored listing per whitelist
Feb 14 11:06:09.958: INFO: namespace e2e-tests-svcaccounts-7f9b8 deletion completed in 31.381801185s

• [SLOW TEST:34.894 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
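Opting out works at either level: automountServiceAccountToken: false on the pod (or on the ServiceAccount) suppresses the token volume, which is what the test verifies across the pod combinations above. Names are illustrative.

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nomount-sa
    automountServiceAccountToken: false
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: nomount-demo
    spec:
      serviceAccountName: nomount-sa
      automountServiceAccountToken: false    # the pod-level setting takes precedence over the service account
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
    EOF
    kubectl get pod nomount-demo -o jsonpath='{.spec.volumes}'   # no token volume is injected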
SSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:06:09.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:06:23.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-r742s" for this suite.
Feb 14 11:06:47.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:06:47.397: INFO: namespace: e2e-tests-replication-controller-r742s, resource: bindings, ignored listing per whitelist
Feb 14 11:06:47.560: INFO: namespace e2e-tests-replication-controller-r742s deletion completed in 24.228176379s

• [SLOW TEST:37.602 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
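The adoption path can be reproduced directly: a bare pod whose labels match a later replication controller ends up with that controller as its owner. Names and the busybox image are illustrative.

    kubectl run pod-adoption --generator=run-pod/v1 --image=busybox \
      --labels=name=adopt-demo -- sh -c 'sleep 3600'
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: adopt-demo
    spec:
      replicas: 1
      selector:
        name: adopt-demo
      template:
        metadata:
          labels:
            name: adopt-demo
        spec:
          containers:
          - name: app
            image: busybox
            command: ["sh", "-c", "sleep 3600"]
    EOF
    kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # "ReplicationController"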
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:06:47.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 14 11:06:48.310: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mgszs,SelfLink:/api/v1/namespaces/e2e-tests-watch-mgszs/configmaps/e2e-watch-test-label-changed,UID:18402b28-4f1a-11ea-a994-fa163e34d433,ResourceVersion:21633616,Generation:0,CreationTimestamp:2020-02-14 11:06:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 14 11:06:48.311: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mgszs,SelfLink:/api/v1/namespaces/e2e-tests-watch-mgszs/configmaps/e2e-watch-test-label-changed,UID:18402b28-4f1a-11ea-a994-fa163e34d433,ResourceVersion:21633617,Generation:0,CreationTimestamp:2020-02-14 11:06:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 14 11:06:48.311: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mgszs,SelfLink:/api/v1/namespaces/e2e-tests-watch-mgszs/configmaps/e2e-watch-test-label-changed,UID:18402b28-4f1a-11ea-a994-fa163e34d433,ResourceVersion:21633618,Generation:0,CreationTimestamp:2020-02-14 11:06:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 14 11:06:58.595: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mgszs,SelfLink:/api/v1/namespaces/e2e-tests-watch-mgszs/configmaps/e2e-watch-test-label-changed,UID:18402b28-4f1a-11ea-a994-fa163e34d433,ResourceVersion:21633632,Generation:0,CreationTimestamp:2020-02-14 11:06:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 14 11:06:58.596: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mgszs,SelfLink:/api/v1/namespaces/e2e-tests-watch-mgszs/configmaps/e2e-watch-test-label-changed,UID:18402b28-4f1a-11ea-a994-fa163e34d433,ResourceVersion:21633633,Generation:0,CreationTimestamp:2020-02-14 11:06:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 14 11:06:58.596: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-mgszs,SelfLink:/api/v1/namespaces/e2e-tests-watch-mgszs/configmaps/e2e-watch-test-label-changed,UID:18402b28-4f1a-11ea-a994-fa163e34d433,ResourceVersion:21633634,Generation:0,CreationTimestamp:2020-02-14 11:06:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:06:58.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-mgszs" for this suite.
Feb 14 11:07:06.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:07:06.721: INFO: namespace: e2e-tests-watch-mgszs, resource: bindings, ignored listing per whitelist
Feb 14 11:07:06.828: INFO: namespace e2e-tests-watch-mgszs deletion completed in 8.219555359s

• [SLOW TEST:19.267 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
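At the API level this is just a label-selected watch: an object that stops matching the selector is delivered to the watcher as a DELETED event, and restoring the label delivers it as ADDED again. A rough interactive approximation (names are illustrative; kubectl's -w output is a simplification of the raw watch events):

    kubectl get configmaps -l watch-this=yes --watch &              # keep a label-selected watch open
    kubectl create configmap watch-demo
    kubectl label configmap watch-demo watch-this=yes               # enters the selected set
    kubectl label configmap watch-demo watch-this=no --overwrite    # drops out of the selected set
    kubectl label configmap watch-demo watch-this=yes --overwrite   # re-enters it
    kubectl delete configmap watch-demo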
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:07:06.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Feb 14 11:07:07.108: INFO: Waiting up to 5m0s for pod "var-expansion-237661ed-4f1a-11ea-af88-0242ac110007" in namespace "e2e-tests-var-expansion-w59k8" to be "success or failure"
Feb 14 11:07:07.210: INFO: Pod "var-expansion-237661ed-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 101.302046ms
Feb 14 11:07:09.296: INFO: Pod "var-expansion-237661ed-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187421716s
Feb 14 11:07:11.309: INFO: Pod "var-expansion-237661ed-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200566431s
Feb 14 11:07:13.460: INFO: Pod "var-expansion-237661ed-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.351507145s
Feb 14 11:07:15.487: INFO: Pod "var-expansion-237661ed-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.378107918s
Feb 14 11:07:17.512: INFO: Pod "var-expansion-237661ed-4f1a-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.403495971s
STEP: Saw pod success
Feb 14 11:07:17.512: INFO: Pod "var-expansion-237661ed-4f1a-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:07:17.524: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-237661ed-4f1a-11ea-af88-0242ac110007 container dapi-container: 
STEP: delete the pod
Feb 14 11:07:18.189: INFO: Waiting for pod var-expansion-237661ed-4f1a-11ea-af88-0242ac110007 to disappear
Feb 14 11:07:18.517: INFO: Pod var-expansion-237661ed-4f1a-11ea-af88-0242ac110007 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:07:18.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-w59k8" for this suite.
Feb 14 11:07:26.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:07:26.981: INFO: namespace: e2e-tests-var-expansion-w59k8, resource: bindings, ignored listing per whitelist
Feb 14 11:07:27.000: INFO: namespace e2e-tests-var-expansion-w59k8 deletion completed in 8.447572435s

• [SLOW TEST:20.172 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
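What this spec exercises is the $(VAR) substitution the kubelet performs on command and args for environment variables declared on the container. A minimal manifest, with illustrative names and image:

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        env:
        - name: MESSAGE
          value: "expanded by the kubelet"
        command: ["sh", "-c", "echo $(MESSAGE)"]    # $(MESSAGE) is substituted before the container starts
    EOF
    kubectl logs var-expansion-demo    # prints "expanded by the kubelet"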
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:07:27.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-psxcd
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 14 11:07:27.101: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 14 11:08:01.391: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-psxcd PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 11:08:01.391: INFO: >>> kubeConfig: /root/.kube/config
I0214 11:08:01.497203       8 log.go:172] (0xc000665a20) (0xc00033fae0) Create stream
I0214 11:08:01.497337       8 log.go:172] (0xc000665a20) (0xc00033fae0) Stream added, broadcasting: 1
I0214 11:08:01.518205       8 log.go:172] (0xc000665a20) Reply frame received for 1
I0214 11:08:01.518291       8 log.go:172] (0xc000665a20) (0xc001e12640) Create stream
I0214 11:08:01.518307       8 log.go:172] (0xc000665a20) (0xc001e12640) Stream added, broadcasting: 3
I0214 11:08:01.519980       8 log.go:172] (0xc000665a20) Reply frame received for 3
I0214 11:08:01.520002       8 log.go:172] (0xc000665a20) (0xc00033fd60) Create stream
I0214 11:08:01.520014       8 log.go:172] (0xc000665a20) (0xc00033fd60) Stream added, broadcasting: 5
I0214 11:08:01.521618       8 log.go:172] (0xc000665a20) Reply frame received for 5
I0214 11:08:01.952064       8 log.go:172] (0xc000665a20) Data frame received for 3
I0214 11:08:01.952193       8 log.go:172] (0xc001e12640) (3) Data frame handling
I0214 11:08:01.952236       8 log.go:172] (0xc001e12640) (3) Data frame sent
I0214 11:08:02.218836       8 log.go:172] (0xc000665a20) (0xc001e12640) Stream removed, broadcasting: 3
I0214 11:08:02.219353       8 log.go:172] (0xc000665a20) (0xc00033fd60) Stream removed, broadcasting: 5
I0214 11:08:02.219886       8 log.go:172] (0xc000665a20) Data frame received for 1
I0214 11:08:02.220086       8 log.go:172] (0xc00033fae0) (1) Data frame handling
I0214 11:08:02.220138       8 log.go:172] (0xc00033fae0) (1) Data frame sent
I0214 11:08:02.220175       8 log.go:172] (0xc000665a20) (0xc00033fae0) Stream removed, broadcasting: 1
I0214 11:08:02.220316       8 log.go:172] (0xc000665a20) Go away received
I0214 11:08:02.221154       8 log.go:172] (0xc000665a20) (0xc00033fae0) Stream removed, broadcasting: 1
I0214 11:08:02.221230       8 log.go:172] (0xc000665a20) (0xc001e12640) Stream removed, broadcasting: 3
I0214 11:08:02.221259       8 log.go:172] (0xc000665a20) (0xc00033fd60) Stream removed, broadcasting: 5
Feb 14 11:08:02.221: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:08:02.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-psxcd" for this suite.
Feb 14 11:08:28.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:08:28.564: INFO: namespace: e2e-tests-pod-network-test-psxcd, resource: bindings, ignored listing per whitelist
Feb 14 11:08:28.569: INFO: namespace e2e-tests-pod-network-test-psxcd deletion completed in 26.312091407s

• [SLOW TEST:61.568 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
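The probe visible above is the whole mechanism: the host test pod curls the webserver pod's /dial endpoint, which sends a UDP request to the target pod and reports which hostname answered. Reproduced by hand, with the IPs and namespace being whatever the test run created:

    kubectl exec host-test-container-pod -n <test-namespace> -- \
      curl -g -q -s 'http://<webserver-pod-ip>:8080/dial?request=hostName&protocol=udp&host=<target-pod-ip>&port=8081&tries=1'
    # a healthy path returns a JSON body naming the target pod; the suite records success once no
    # endpoints remain outstanding, which is what the "Waiting for endpoints: map[]" line above reflects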
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:08:28.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-54201c85-4f1a-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 11:08:28.831: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5420d414-4f1a-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-44wnz" to be "success or failure"
Feb 14 11:08:28.861: INFO: Pod "pod-projected-configmaps-5420d414-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 30.173054ms
Feb 14 11:08:30.875: INFO: Pod "pod-projected-configmaps-5420d414-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04423374s
Feb 14 11:08:32.906: INFO: Pod "pod-projected-configmaps-5420d414-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075237449s
Feb 14 11:08:35.314: INFO: Pod "pod-projected-configmaps-5420d414-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.483297821s
Feb 14 11:08:37.379: INFO: Pod "pod-projected-configmaps-5420d414-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.548333889s
Feb 14 11:08:39.394: INFO: Pod "pod-projected-configmaps-5420d414-4f1a-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.562692484s
STEP: Saw pod success
Feb 14 11:08:39.394: INFO: Pod "pod-projected-configmaps-5420d414-4f1a-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:08:39.400: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-5420d414-4f1a-11ea-af88-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 11:08:40.085: INFO: Waiting for pod pod-projected-configmaps-5420d414-4f1a-11ea-af88-0242ac110007 to disappear
Feb 14 11:08:40.397: INFO: Pod pod-projected-configmaps-5420d414-4f1a-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:08:40.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-44wnz" for this suite.
Feb 14 11:08:48.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:08:48.721: INFO: namespace: e2e-tests-projected-44wnz, resource: bindings, ignored listing per whitelist
Feb 14 11:08:48.802: INFO: namespace e2e-tests-projected-44wnz deletion completed in 8.392525323s

• [SLOW TEST:20.233 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
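Same shape as the secret-volume case, but through a projected volume with a configMap source. Names, the key and the image are illustrative.

    kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-pod
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["cat", "/etc/projected/data-1"]
        volumeMounts:
        - name: config
          mountPath: /etc/projected
      volumes:
      - name: config
        projected:
          sources:
          - configMap:
              name: projected-cm-demo
    EOF
    kubectl logs projected-cm-pod    # prints "value-1" once the pod has Succeeded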
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:08:48.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-knk8k
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 14 11:08:49.420: INFO: Found 0 stateful pods, waiting for 3
Feb 14 11:08:59.445: INFO: Found 2 stateful pods, waiting for 3
Feb 14 11:09:09.461: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:09:09.461: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:09:09.461: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 14 11:09:19.454: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:09:19.454: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:09:19.454: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 14 11:09:19.559: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 14 11:09:29.693: INFO: Updating stateful set ss2
Feb 14 11:09:29.767: INFO: Waiting for Pod e2e-tests-statefulset-knk8k/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 14 11:09:40.183: INFO: Found 2 stateful pods, waiting for 3
Feb 14 11:09:50.240: INFO: Found 2 stateful pods, waiting for 3
Feb 14 11:10:00.208: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:10:00.208: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:10:00.208: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 14 11:10:10.202: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:10:10.202: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:10:10.202: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 14 11:10:10.259: INFO: Updating stateful set ss2
Feb 14 11:10:10.292: INFO: Waiting for Pod e2e-tests-statefulset-knk8k/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 11:10:20.333: INFO: Waiting for Pod e2e-tests-statefulset-knk8k/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 11:10:30.419: INFO: Updating stateful set ss2
Feb 14 11:10:30.449: INFO: Waiting for StatefulSet e2e-tests-statefulset-knk8k/ss2 to complete update
Feb 14 11:10:30.449: INFO: Waiting for Pod e2e-tests-statefulset-knk8k/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 11:10:40.489: INFO: Waiting for StatefulSet e2e-tests-statefulset-knk8k/ss2 to complete update
Feb 14 11:10:40.490: INFO: Waiting for Pod e2e-tests-statefulset-knk8k/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 11:10:50.494: INFO: Waiting for StatefulSet e2e-tests-statefulset-knk8k/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 14 11:11:00.506: INFO: Deleting all statefulset in ns e2e-tests-statefulset-knk8k
Feb 14 11:11:00.517: INFO: Scaling statefulset ss2 to 0
Feb 14 11:11:30.608: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 11:11:30.620: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:11:30.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-knk8k" for this suite.
Feb 14 11:11:38.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:11:38.736: INFO: namespace: e2e-tests-statefulset-knk8k, resource: bindings, ignored listing per whitelist
Feb 14 11:11:38.850: INFO: namespace e2e-tests-statefulset-knk8k deletion completed in 8.187762048s

• [SLOW TEST:170.048 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
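As an aside, the canary and phased behaviour exercised above hinges on spec.updateStrategy.rollingUpdate.partition: only pods with an ordinal greater than or equal to the partition are moved to the new revision. A rough manual equivalent of the sequence in this spec (namespace taken from the log; the container name nginx is an assumption about how the test builds its StatefulSet):

  # hold every pod on the current revision: partition above the replica count
  kubectl -n e2e-tests-statefulset-knk8k patch statefulset ss2 \
    -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
  # change the template image; this creates the update revision but rolls no pod yet
  kubectl -n e2e-tests-statefulset-knk8k set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
  # canary: lower the partition to 2 so only ss2-2 moves to the new revision
  kubectl -n e2e-tests-statefulset-knk8k patch statefulset ss2 \
    -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
  # phased roll-out: lower the partition to 0 to roll the remaining pods, then wait
  kubectl -n e2e-tests-statefulset-knk8k patch statefulset ss2 \
    -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
  kubectl -n e2e-tests-statefulset-knk8k rollout status statefulset/ss2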
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:11:38.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 14 11:11:39.039: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:11:55.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-p62xz" for this suite.
Feb 14 11:12:02.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:12:02.322: INFO: namespace: e2e-tests-init-container-p62xz, resource: bindings, ignored listing per whitelist
Feb 14 11:12:02.375: INFO: namespace e2e-tests-init-container-p62xz deletion completed in 6.261169265s

• [SLOW TEST:23.525 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
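The behaviour relied on above is that, with restartPolicy: Never, a failing init container leaves the pod in phase Failed and the app container is never started. A minimal pod that shows this (names and image are illustrative, not from the test):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-fail-demo
  spec:
    restartPolicy: Never
    initContainers:
    - name: init-that-fails
      image: docker.io/library/busybox:1.29
      command: ["/bin/sh", "-c", "exit 1"]
    containers:
    - name: app-that-never-starts
      image: docker.io/library/busybox:1.29
      command: ["/bin/sh", "-c", "echo should not run"]
  EOF
  kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # expected: Failed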
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:12:02.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-d39f691a-4f1a-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 11:12:02.710: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d3a24d1b-4f1a-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-jpffs" to be "success or failure"
Feb 14 11:12:02.720: INFO: Pod "pod-projected-secrets-d3a24d1b-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.253267ms
Feb 14 11:12:04.736: INFO: Pod "pod-projected-secrets-d3a24d1b-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025215275s
Feb 14 11:12:06.765: INFO: Pod "pod-projected-secrets-d3a24d1b-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054680774s
Feb 14 11:12:08.787: INFO: Pod "pod-projected-secrets-d3a24d1b-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076296478s
Feb 14 11:12:11.287: INFO: Pod "pod-projected-secrets-d3a24d1b-4f1a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.576218585s
Feb 14 11:12:13.302: INFO: Pod "pod-projected-secrets-d3a24d1b-4f1a-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.591661999s
STEP: Saw pod success
Feb 14 11:12:13.302: INFO: Pod "pod-projected-secrets-d3a24d1b-4f1a-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:12:13.308: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-d3a24d1b-4f1a-11ea-af88-0242ac110007 container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 11:12:13.394: INFO: Waiting for pod pod-projected-secrets-d3a24d1b-4f1a-11ea-af88-0242ac110007 to disappear
Feb 14 11:12:13.456: INFO: Pod pod-projected-secrets-d3a24d1b-4f1a-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:12:13.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jpffs" for this suite.
Feb 14 11:12:19.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:12:19.617: INFO: namespace: e2e-tests-projected-jpffs, resource: bindings, ignored listing per whitelist
Feb 14 11:12:19.754: INFO: namespace e2e-tests-projected-jpffs deletion completed in 6.288229469s

• [SLOW TEST:17.379 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
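This is the Secret counterpart of the projected ConfigMap case earlier in this run: the pod shape is the same and only the projected source changes, so just the differing pieces are sketched here (names illustrative):

  kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
  # in the pod spec, swap the projected source for a secret:
  #   volumes:
  #   - name: projected-secret-volume
  #     projected:
  #       sources:
  #       - secret:
  #           name: projected-secret-demo
  kubectl logs pod-projected-secret-demo   # expected to print "value-1" once the pod reaches Succeeded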
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:12:19.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-vzf64
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-vzf64
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-vzf64
Feb 14 11:12:20.068: INFO: Found 0 stateful pods, waiting for 1
Feb 14 11:12:30.100: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 14 11:12:30.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 11:12:30.698: INFO: stderr: "I0214 11:12:30.301609     652 log.go:172] (0xc0006c4420) (0xc000742640) Create stream\nI0214 11:12:30.301840     652 log.go:172] (0xc0006c4420) (0xc000742640) Stream added, broadcasting: 1\nI0214 11:12:30.309347     652 log.go:172] (0xc0006c4420) Reply frame received for 1\nI0214 11:12:30.309391     652 log.go:172] (0xc0006c4420) (0xc0007426e0) Create stream\nI0214 11:12:30.309399     652 log.go:172] (0xc0006c4420) (0xc0007426e0) Stream added, broadcasting: 3\nI0214 11:12:30.310249     652 log.go:172] (0xc0006c4420) Reply frame received for 3\nI0214 11:12:30.310279     652 log.go:172] (0xc0006c4420) (0xc0005f2c80) Create stream\nI0214 11:12:30.310295     652 log.go:172] (0xc0006c4420) (0xc0005f2c80) Stream added, broadcasting: 5\nI0214 11:12:30.311279     652 log.go:172] (0xc0006c4420) Reply frame received for 5\nI0214 11:12:30.578110     652 log.go:172] (0xc0006c4420) Data frame received for 3\nI0214 11:12:30.578199     652 log.go:172] (0xc0007426e0) (3) Data frame handling\nI0214 11:12:30.578234     652 log.go:172] (0xc0007426e0) (3) Data frame sent\nI0214 11:12:30.684233     652 log.go:172] (0xc0006c4420) Data frame received for 1\nI0214 11:12:30.684535     652 log.go:172] (0xc0006c4420) (0xc0005f2c80) Stream removed, broadcasting: 5\nI0214 11:12:30.684666     652 log.go:172] (0xc000742640) (1) Data frame handling\nI0214 11:12:30.684746     652 log.go:172] (0xc0006c4420) (0xc0007426e0) Stream removed, broadcasting: 3\nI0214 11:12:30.685111     652 log.go:172] (0xc000742640) (1) Data frame sent\nI0214 11:12:30.685272     652 log.go:172] (0xc0006c4420) (0xc000742640) Stream removed, broadcasting: 1\nI0214 11:12:30.685373     652 log.go:172] (0xc0006c4420) Go away received\nI0214 11:12:30.686085     652 log.go:172] (0xc0006c4420) (0xc000742640) Stream removed, broadcasting: 1\nI0214 11:12:30.686108     652 log.go:172] (0xc0006c4420) (0xc0007426e0) Stream removed, broadcasting: 3\nI0214 11:12:30.686116     652 log.go:172] (0xc0006c4420) (0xc0005f2c80) Stream removed, broadcasting: 5\n"
Feb 14 11:12:30.698: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 11:12:30.698: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 11:12:30.711: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 14 11:12:40.729: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 11:12:40.729: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 11:12:40.791: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 14 11:12:40.792: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  }]
Feb 14 11:12:40.792: INFO: 
Feb 14 11:12:40.792: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 14 11:12:42.008: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.961407404s
Feb 14 11:12:43.024: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.744610801s
Feb 14 11:12:44.051: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.728688432s
Feb 14 11:12:45.087: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.702224511s
Feb 14 11:12:46.116: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.665698427s
Feb 14 11:12:47.215: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.637078435s
Feb 14 11:12:48.752: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.53768114s
Feb 14 11:12:49.769: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.00107089s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-vzf64
Feb 14 11:12:50.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:12:52.061: INFO: stderr: "I0214 11:12:51.073607     674 log.go:172] (0xc000704370) (0xc0007c6640) Create stream\nI0214 11:12:51.073973     674 log.go:172] (0xc000704370) (0xc0007c6640) Stream added, broadcasting: 1\nI0214 11:12:51.083300     674 log.go:172] (0xc000704370) Reply frame received for 1\nI0214 11:12:51.083441     674 log.go:172] (0xc000704370) (0xc0005e4c80) Create stream\nI0214 11:12:51.083500     674 log.go:172] (0xc000704370) (0xc0005e4c80) Stream added, broadcasting: 3\nI0214 11:12:51.084730     674 log.go:172] (0xc000704370) Reply frame received for 3\nI0214 11:12:51.084792     674 log.go:172] (0xc000704370) (0xc00075a000) Create stream\nI0214 11:12:51.084822     674 log.go:172] (0xc000704370) (0xc00075a000) Stream added, broadcasting: 5\nI0214 11:12:51.086358     674 log.go:172] (0xc000704370) Reply frame received for 5\nI0214 11:12:51.786873     674 log.go:172] (0xc000704370) Data frame received for 3\nI0214 11:12:51.786999     674 log.go:172] (0xc0005e4c80) (3) Data frame handling\nI0214 11:12:51.787036     674 log.go:172] (0xc0005e4c80) (3) Data frame sent\nI0214 11:12:52.038973     674 log.go:172] (0xc000704370) Data frame received for 1\nI0214 11:12:52.039169     674 log.go:172] (0xc000704370) (0xc00075a000) Stream removed, broadcasting: 5\nI0214 11:12:52.039357     674 log.go:172] (0xc0007c6640) (1) Data frame handling\nI0214 11:12:52.039393     674 log.go:172] (0xc0007c6640) (1) Data frame sent\nI0214 11:12:52.039455     674 log.go:172] (0xc000704370) (0xc0005e4c80) Stream removed, broadcasting: 3\nI0214 11:12:52.039515     674 log.go:172] (0xc000704370) (0xc0007c6640) Stream removed, broadcasting: 1\nI0214 11:12:52.039558     674 log.go:172] (0xc000704370) Go away received\nI0214 11:12:52.040451     674 log.go:172] (0xc000704370) (0xc0007c6640) Stream removed, broadcasting: 1\nI0214 11:12:52.040470     674 log.go:172] (0xc000704370) (0xc0005e4c80) Stream removed, broadcasting: 3\nI0214 11:12:52.040483     674 log.go:172] (0xc000704370) (0xc00075a000) Stream removed, broadcasting: 5\n"
Feb 14 11:12:52.061: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 11:12:52.061: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 11:12:52.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:12:53.033: INFO: stderr: "I0214 11:12:52.624782     696 log.go:172] (0xc000744c60) (0xc000711860) Create stream\nI0214 11:12:52.625043     696 log.go:172] (0xc000744c60) (0xc000711860) Stream added, broadcasting: 1\nI0214 11:12:52.651420     696 log.go:172] (0xc000744c60) Reply frame received for 1\nI0214 11:12:52.651605     696 log.go:172] (0xc000744c60) (0xc000710be0) Create stream\nI0214 11:12:52.651618     696 log.go:172] (0xc000744c60) (0xc000710be0) Stream added, broadcasting: 3\nI0214 11:12:52.653777     696 log.go:172] (0xc000744c60) Reply frame received for 3\nI0214 11:12:52.653836     696 log.go:172] (0xc000744c60) (0xc000710d20) Create stream\nI0214 11:12:52.653851     696 log.go:172] (0xc000744c60) (0xc000710d20) Stream added, broadcasting: 5\nI0214 11:12:52.655392     696 log.go:172] (0xc000744c60) Reply frame received for 5\nI0214 11:12:52.818668     696 log.go:172] (0xc000744c60) Data frame received for 5\nI0214 11:12:52.818834     696 log.go:172] (0xc000710d20) (5) Data frame handling\nI0214 11:12:52.818881     696 log.go:172] (0xc000710d20) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0214 11:12:52.818930     696 log.go:172] (0xc000744c60) Data frame received for 3\nI0214 11:12:52.818993     696 log.go:172] (0xc000710be0) (3) Data frame handling\nI0214 11:12:52.819029     696 log.go:172] (0xc000710be0) (3) Data frame sent\nI0214 11:12:53.021388     696 log.go:172] (0xc000744c60) (0xc000710be0) Stream removed, broadcasting: 3\nI0214 11:12:53.021587     696 log.go:172] (0xc000744c60) Data frame received for 1\nI0214 11:12:53.021707     696 log.go:172] (0xc000744c60) (0xc000710d20) Stream removed, broadcasting: 5\nI0214 11:12:53.021785     696 log.go:172] (0xc000711860) (1) Data frame handling\nI0214 11:12:53.021814     696 log.go:172] (0xc000711860) (1) Data frame sent\nI0214 11:12:53.021823     696 log.go:172] (0xc000744c60) (0xc000711860) Stream removed, broadcasting: 1\nI0214 11:12:53.021837     696 log.go:172] (0xc000744c60) Go away received\nI0214 11:12:53.022169     696 log.go:172] (0xc000744c60) (0xc000711860) Stream removed, broadcasting: 1\nI0214 11:12:53.022181     696 log.go:172] (0xc000744c60) (0xc000710be0) Stream removed, broadcasting: 3\nI0214 11:12:53.022189     696 log.go:172] (0xc000744c60) (0xc000710d20) Stream removed, broadcasting: 5\n"
Feb 14 11:12:53.033: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 11:12:53.033: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 11:12:53.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:12:53.402: INFO: stderr: "I0214 11:12:53.176195     717 log.go:172] (0xc0006de2c0) (0xc00065b2c0) Create stream\nI0214 11:12:53.176439     717 log.go:172] (0xc0006de2c0) (0xc00065b2c0) Stream added, broadcasting: 1\nI0214 11:12:53.182019     717 log.go:172] (0xc0006de2c0) Reply frame received for 1\nI0214 11:12:53.182122     717 log.go:172] (0xc0006de2c0) (0xc00039e000) Create stream\nI0214 11:12:53.182136     717 log.go:172] (0xc0006de2c0) (0xc00039e000) Stream added, broadcasting: 3\nI0214 11:12:53.183510     717 log.go:172] (0xc0006de2c0) Reply frame received for 3\nI0214 11:12:53.183568     717 log.go:172] (0xc0006de2c0) (0xc00065b360) Create stream\nI0214 11:12:53.183573     717 log.go:172] (0xc0006de2c0) (0xc00065b360) Stream added, broadcasting: 5\nI0214 11:12:53.184523     717 log.go:172] (0xc0006de2c0) Reply frame received for 5\nI0214 11:12:53.296878     717 log.go:172] (0xc0006de2c0) Data frame received for 3\nI0214 11:12:53.297102     717 log.go:172] (0xc00039e000) (3) Data frame handling\nI0214 11:12:53.297138     717 log.go:172] (0xc00039e000) (3) Data frame sent\nI0214 11:12:53.297188     717 log.go:172] (0xc0006de2c0) Data frame received for 5\nI0214 11:12:53.297199     717 log.go:172] (0xc00065b360) (5) Data frame handling\nI0214 11:12:53.297210     717 log.go:172] (0xc00065b360) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0214 11:12:53.394175     717 log.go:172] (0xc0006de2c0) Data frame received for 1\nI0214 11:12:53.394305     717 log.go:172] (0xc00065b2c0) (1) Data frame handling\nI0214 11:12:53.394335     717 log.go:172] (0xc00065b2c0) (1) Data frame sent\nI0214 11:12:53.394684     717 log.go:172] (0xc0006de2c0) (0xc00065b360) Stream removed, broadcasting: 5\nI0214 11:12:53.394778     717 log.go:172] (0xc0006de2c0) (0xc00065b2c0) Stream removed, broadcasting: 1\nI0214 11:12:53.395086     717 log.go:172] (0xc0006de2c0) (0xc00039e000) Stream removed, broadcasting: 3\nI0214 11:12:53.395124     717 log.go:172] (0xc0006de2c0) (0xc00065b2c0) Stream removed, broadcasting: 1\nI0214 11:12:53.395134     717 log.go:172] (0xc0006de2c0) (0xc00039e000) Stream removed, broadcasting: 3\nI0214 11:12:53.395141     717 log.go:172] (0xc0006de2c0) (0xc00065b360) Stream removed, broadcasting: 5\n"
Feb 14 11:12:53.403: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 11:12:53.403: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 11:12:53.414: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:12:53.414: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:12:53.414: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
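An aside on what "burst scaling" means here: the stateful set under test is assumed to use podManagementPolicy: Parallel, so scale-up and scale-down do not wait for each ordinal to become Ready, which is why the unready ss-0 above does not block the other replicas. Driving the same scaling by hand would look like this (namespace from the log, replica counts from this spec):

  kubectl -n e2e-tests-statefulset-vzf64 scale statefulset ss --replicas=3
  kubectl -n e2e-tests-statefulset-vzf64 get statefulset ss -o jsonpath='{.status.readyReplicas}'
  kubectl -n e2e-tests-statefulset-vzf64 scale statefulset ss --replicas=0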
STEP: Confirming that stateful set scale down will not halt with unhealthy stateful pod
Feb 14 11:12:53.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 11:12:53.987: INFO: stderr: "I0214 11:12:53.593625     739 log.go:172] (0xc0006da370) (0xc0006f8640) Create stream\nI0214 11:12:53.593776     739 log.go:172] (0xc0006da370) (0xc0006f8640) Stream added, broadcasting: 1\nI0214 11:12:53.599032     739 log.go:172] (0xc0006da370) Reply frame received for 1\nI0214 11:12:53.599060     739 log.go:172] (0xc0006da370) (0xc000786d20) Create stream\nI0214 11:12:53.599069     739 log.go:172] (0xc0006da370) (0xc000786d20) Stream added, broadcasting: 3\nI0214 11:12:53.601724     739 log.go:172] (0xc0006da370) Reply frame received for 3\nI0214 11:12:53.601772     739 log.go:172] (0xc0006da370) (0xc0006f86e0) Create stream\nI0214 11:12:53.601792     739 log.go:172] (0xc0006da370) (0xc0006f86e0) Stream added, broadcasting: 5\nI0214 11:12:53.611686     739 log.go:172] (0xc0006da370) Reply frame received for 5\nI0214 11:12:53.768286     739 log.go:172] (0xc0006da370) Data frame received for 3\nI0214 11:12:53.768338     739 log.go:172] (0xc000786d20) (3) Data frame handling\nI0214 11:12:53.768357     739 log.go:172] (0xc000786d20) (3) Data frame sent\nI0214 11:12:53.980015     739 log.go:172] (0xc0006da370) (0xc000786d20) Stream removed, broadcasting: 3\nI0214 11:12:53.980251     739 log.go:172] (0xc0006da370) Data frame received for 1\nI0214 11:12:53.980262     739 log.go:172] (0xc0006f8640) (1) Data frame handling\nI0214 11:12:53.980276     739 log.go:172] (0xc0006f8640) (1) Data frame sent\nI0214 11:12:53.980321     739 log.go:172] (0xc0006da370) (0xc0006f8640) Stream removed, broadcasting: 1\nI0214 11:12:53.980391     739 log.go:172] (0xc0006da370) (0xc0006f86e0) Stream removed, broadcasting: 5\nI0214 11:12:53.980435     739 log.go:172] (0xc0006da370) Go away received\nI0214 11:12:53.980601     739 log.go:172] (0xc0006da370) (0xc0006f8640) Stream removed, broadcasting: 1\nI0214 11:12:53.980613     739 log.go:172] (0xc0006da370) (0xc000786d20) Stream removed, broadcasting: 3\nI0214 11:12:53.980619     739 log.go:172] (0xc0006da370) (0xc0006f86e0) Stream removed, broadcasting: 5\n"
Feb 14 11:12:53.988: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 11:12:53.988: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 11:12:53.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 11:12:54.515: INFO: stderr: "I0214 11:12:54.175030     761 log.go:172] (0xc000716370) (0xc000736640) Create stream\nI0214 11:12:54.175275     761 log.go:172] (0xc000716370) (0xc000736640) Stream added, broadcasting: 1\nI0214 11:12:54.179109     761 log.go:172] (0xc000716370) Reply frame received for 1\nI0214 11:12:54.179140     761 log.go:172] (0xc000716370) (0xc00057ce60) Create stream\nI0214 11:12:54.179150     761 log.go:172] (0xc000716370) (0xc00057ce60) Stream added, broadcasting: 3\nI0214 11:12:54.179972     761 log.go:172] (0xc000716370) Reply frame received for 3\nI0214 11:12:54.180012     761 log.go:172] (0xc000716370) (0xc0006c2000) Create stream\nI0214 11:12:54.180030     761 log.go:172] (0xc000716370) (0xc0006c2000) Stream added, broadcasting: 5\nI0214 11:12:54.180967     761 log.go:172] (0xc000716370) Reply frame received for 5\nI0214 11:12:54.304020     761 log.go:172] (0xc000716370) Data frame received for 3\nI0214 11:12:54.304166     761 log.go:172] (0xc00057ce60) (3) Data frame handling\nI0214 11:12:54.304211     761 log.go:172] (0xc00057ce60) (3) Data frame sent\nI0214 11:12:54.506605     761 log.go:172] (0xc000716370) Data frame received for 1\nI0214 11:12:54.506795     761 log.go:172] (0xc000716370) (0xc00057ce60) Stream removed, broadcasting: 3\nI0214 11:12:54.506862     761 log.go:172] (0xc000736640) (1) Data frame handling\nI0214 11:12:54.506885     761 log.go:172] (0xc000736640) (1) Data frame sent\nI0214 11:12:54.506919     761 log.go:172] (0xc000716370) (0xc0006c2000) Stream removed, broadcasting: 5\nI0214 11:12:54.506942     761 log.go:172] (0xc000716370) (0xc000736640) Stream removed, broadcasting: 1\nI0214 11:12:54.506954     761 log.go:172] (0xc000716370) Go away received\nI0214 11:12:54.507695     761 log.go:172] (0xc000716370) (0xc000736640) Stream removed, broadcasting: 1\nI0214 11:12:54.507718     761 log.go:172] (0xc000716370) (0xc00057ce60) Stream removed, broadcasting: 3\nI0214 11:12:54.507733     761 log.go:172] (0xc000716370) (0xc0006c2000) Stream removed, broadcasting: 5\n"
Feb 14 11:12:54.515: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 11:12:54.515: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 11:12:54.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 11:12:54.957: INFO: stderr: "I0214 11:12:54.707701     783 log.go:172] (0xc000714370) (0xc000734640) Create stream\nI0214 11:12:54.707983     783 log.go:172] (0xc000714370) (0xc000734640) Stream added, broadcasting: 1\nI0214 11:12:54.711667     783 log.go:172] (0xc000714370) Reply frame received for 1\nI0214 11:12:54.711705     783 log.go:172] (0xc000714370) (0xc0005f6c80) Create stream\nI0214 11:12:54.711713     783 log.go:172] (0xc000714370) (0xc0005f6c80) Stream added, broadcasting: 3\nI0214 11:12:54.712627     783 log.go:172] (0xc000714370) Reply frame received for 3\nI0214 11:12:54.712648     783 log.go:172] (0xc000714370) (0xc00072a000) Create stream\nI0214 11:12:54.712658     783 log.go:172] (0xc000714370) (0xc00072a000) Stream added, broadcasting: 5\nI0214 11:12:54.713276     783 log.go:172] (0xc000714370) Reply frame received for 5\nI0214 11:12:54.826096     783 log.go:172] (0xc000714370) Data frame received for 3\nI0214 11:12:54.826155     783 log.go:172] (0xc0005f6c80) (3) Data frame handling\nI0214 11:12:54.826169     783 log.go:172] (0xc0005f6c80) (3) Data frame sent\nI0214 11:12:54.947595     783 log.go:172] (0xc000714370) Data frame received for 1\nI0214 11:12:54.947752     783 log.go:172] (0xc000734640) (1) Data frame handling\nI0214 11:12:54.947802     783 log.go:172] (0xc000734640) (1) Data frame sent\nI0214 11:12:54.947834     783 log.go:172] (0xc000714370) (0xc000734640) Stream removed, broadcasting: 1\nI0214 11:12:54.948858     783 log.go:172] (0xc000714370) (0xc0005f6c80) Stream removed, broadcasting: 3\nI0214 11:12:54.948936     783 log.go:172] (0xc000714370) (0xc00072a000) Stream removed, broadcasting: 5\nI0214 11:12:54.949034     783 log.go:172] (0xc000714370) Go away received\nI0214 11:12:54.949090     783 log.go:172] (0xc000714370) (0xc000734640) Stream removed, broadcasting: 1\nI0214 11:12:54.949110     783 log.go:172] (0xc000714370) (0xc0005f6c80) Stream removed, broadcasting: 3\nI0214 11:12:54.949120     783 log.go:172] (0xc000714370) (0xc00072a000) Stream removed, broadcasting: 5\n"
Feb 14 11:12:54.957: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 11:12:54.957: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 11:12:54.957: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 11:12:54.966: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 14 11:13:04.993: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 11:13:04.993: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 11:13:04.993: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 11:13:05.042: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 14 11:13:05.042: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  }]
Feb 14 11:13:05.042: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:05.042: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:05.042: INFO: 
Feb 14 11:13:05.042: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 11:13:06.086: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 14 11:13:06.086: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  }]
Feb 14 11:13:06.087: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:06.087: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:06.087: INFO: 
Feb 14 11:13:06.087: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 11:13:07.240: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 14 11:13:07.240: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  }]
Feb 14 11:13:07.240: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:07.240: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:07.240: INFO: 
Feb 14 11:13:07.240: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 11:13:08.275: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 14 11:13:08.275: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  }]
Feb 14 11:13:08.275: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:08.275: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:08.275: INFO: 
Feb 14 11:13:08.275: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 11:13:09.769: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 14 11:13:09.769: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  }]
Feb 14 11:13:09.770: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:09.770: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:09.770: INFO: 
Feb 14 11:13:09.770: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 11:13:10.782: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 14 11:13:10.782: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  }]
Feb 14 11:13:10.783: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:10.783: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:10.783: INFO: 
Feb 14 11:13:10.783: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 11:13:12.090: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 14 11:13:12.090: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  }]
Feb 14 11:13:12.090: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:12.090: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:12.090: INFO: 
Feb 14 11:13:12.090: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 11:13:13.169: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 14 11:13:13.170: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:20 +0000 UTC  }]
Feb 14 11:13:13.170: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:13.170: INFO: ss-2  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:13.170: INFO: 
Feb 14 11:13:13.170: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 14 11:13:14.186: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Feb 14 11:13:14.186: INFO: ss-1  hunter-server-hu5at5svl7ps  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:12:40 +0000 UTC  }]
Feb 14 11:13:14.187: INFO: 
Feb 14 11:13:14.187: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace e2e-tests-statefulset-vzf64
Feb 14 11:13:15.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:13:15.556: INFO: rc: 1
Feb 14 11:13:15.557: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00210f0e0 exit status 1   true [0xc000bacae0 0xc000bacd18 0xc000baceb0] [0xc000bacae0 0xc000bacd18 0xc000baceb0] [0xc000bacca0 0xc000bace58] [0x935700 0x935700] 0xc002258d20 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Feb 14 11:13:25.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:13:25.766: INFO: rc: 1
Feb 14 11:13:25.766: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0012f71a0 exit status 1   true [0xc00037af50 0xc00037b148 0xc00037b2a0] [0xc00037af50 0xc00037b148 0xc00037b2a0] [0xc00037b0b8 0xc00037b268] [0x935700 0x935700] 0xc002052780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 14 11:13:35.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:13:35.961: INFO: rc: 1
Feb 14 11:13:35.961: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001886120 exit status 1   true [0xc00011e0c8 0xc00011e158 0xc00011e228] [0xc00011e0c8 0xc00011e158 0xc00011e228] [0xc00011e148 0xc00011e1d8] [0x935700 0x935700] 0xc0014361e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 14 11:13:45.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:13:46.156: INFO: rc: 1
Feb 14 11:13:46.156: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00210f410 exit status 1   true [0xc000bad020 0xc000bad218 0xc000bad2a0] [0xc000bad020 0xc000bad218 0xc000bad2a0] [0xc000bad178 0xc000bad290] [0x935700 0x935700] 0xc002258fc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 14 11:13:56.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:13:56.358: INFO: rc: 1
Feb 14 11:13:56.358: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001a76150 exit status 1   true [0xc001314000 0xc001314018 0xc001314030] [0xc001314000 0xc001314018 0xc001314030] [0xc001314010 0xc001314028] [0x935700 0x935700] 0xc0021761e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 14 11:14:06.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:14:06.605: INFO: rc: 1
Feb 14 11:14:06.606: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc00210f530 exit status 1   true [0xc000bad2c8 0xc000bad330 0xc000bad3d8] [0xc000bad2c8 0xc000bad330 0xc000bad3d8] [0xc000bad2e8 0xc000bad390] [0x935700 0x935700] 0xc002259260 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 14 11:14:16.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:14:16.781: INFO: rc: 1
Feb 14 11:14:16.782: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0012f7320 exit status 1   true [0xc00037b2c0 0xc00037b360 0xc00037b3e0] [0xc00037b2c0 0xc00037b360 0xc00037b3e0] [0xc00037b308 0xc00037b3b0] [0x935700 0x935700] 0xc002052a80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

Feb 14 11:14:26.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:14:26.975: INFO: rc: 1
Feb 14 11:14:26.975: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001a762d0 exit status 1   true [0xc001314038 0xc001314050 0xc001314068] [0xc001314038 0xc001314050 0xc001314068] [0xc001314048 0xc001314060] [0x935700 0x935700] 0xc002176480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1

[... identical retry attempts condensed: the same kubectl exec ('mv -v /tmp/index.html /usr/share/nginx/html/ || true' against pod ss-1 in e2e-tests-statefulset-vzf64) was re-run every 10s from 11:14:36.976 through 11:18:11.320, and every attempt returned rc: 1 with the same stderr, Error from server (NotFound): pods "ss-1" not found ...]
Feb 14 11:18:21.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-vzf64 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:18:21.512: INFO: rc: 1
Feb 14 11:18:21.512: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Feb 14 11:18:21.512: INFO: Scaling statefulset ss to 0
Feb 14 11:18:21.535: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 14 11:18:21.539: INFO: Deleting all statefulset in ns e2e-tests-statefulset-vzf64
Feb 14 11:18:21.573: INFO: Scaling statefulset ss to 0
Feb 14 11:18:21.596: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 11:18:21.600: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:18:21.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-vzf64" for this suite.
Feb 14 11:18:29.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:18:29.770: INFO: namespace: e2e-tests-statefulset-vzf64, resource: bindings, ignored listing per whitelist
Feb 14 11:18:29.920: INFO: namespace e2e-tests-statefulset-vzf64 deletion completed in 8.278089056s

• [SLOW TEST:370.166 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
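For reference, the operations this StatefulSet spec drives through the e2e framework map directly onto plain kubectl. A minimal sketch, assuming the ss pods' readiness check reads /usr/share/nginx/html/index.html (the namespace, pod, and StatefulSet names are the ones from the run above):

# Restore readiness by moving the page back into the web root; the `|| true`
# mirrors the test, which tolerates the exec racing against pod deletion.
kubectl exec -n e2e-tests-statefulset-vzf64 ss-1 -- \
  /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'

# Burst-scale the StatefulSet, then drain it as the AfterEach does and watch
# status.replicas fall to 0.
kubectl scale statefulset ss -n e2e-tests-statefulset-vzf64 --replicas=3
kubectl scale statefulset ss -n e2e-tests-statefulset-vzf64 --replicas=0
kubectl get statefulset ss -n e2e-tests-statefulset-vzf64 -o jsonpath='{.status.replicas}'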
S
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:18:29.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:19:30.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-x44pl" for this suite.
Feb 14 11:20:04.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:20:04.607: INFO: namespace: e2e-tests-container-probe-x44pl, resource: bindings, ignored listing per whitelist
Feb 14 11:20:04.654: INFO: namespace e2e-tests-container-probe-x44pl deletion completed in 34.421426973s

• [SLOW TEST:94.733 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
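A minimal sketch of the kind of pod this spec checks: the readiness probe always fails, so the pod never reports Ready yet is never restarted. The pod name and image below are illustrative, not the ones used by the suite.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready        # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]      # always fails, so Ready stays False
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

# Ready should stay false and restartCount should stay 0:
kubectl get pod readiness-never-ready \
  -o jsonpath='{.status.containerStatuses[0].ready} {.status.containerStatuses[0].restartCount}'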
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:20:04.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 14 11:20:04.959: INFO: Waiting up to 5m0s for pod "pod-f318b1f5-4f1b-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-nnjs8" to be "success or failure"
Feb 14 11:20:05.073: INFO: Pod "pod-f318b1f5-4f1b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 113.297283ms
Feb 14 11:20:07.084: INFO: Pod "pod-f318b1f5-4f1b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12484649s
Feb 14 11:20:09.100: INFO: Pod "pod-f318b1f5-4f1b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140565839s
Feb 14 11:20:11.127: INFO: Pod "pod-f318b1f5-4f1b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167113162s
Feb 14 11:20:13.181: INFO: Pod "pod-f318b1f5-4f1b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.221868608s
Feb 14 11:20:15.194: INFO: Pod "pod-f318b1f5-4f1b-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.234121073s
STEP: Saw pod success
Feb 14 11:20:15.194: INFO: Pod "pod-f318b1f5-4f1b-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:20:15.198: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f318b1f5-4f1b-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 11:20:15.289: INFO: Waiting for pod pod-f318b1f5-4f1b-11ea-af88-0242ac110007 to disappear
Feb 14 11:20:15.306: INFO: Pod pod-f318b1f5-4f1b-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:20:15.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nnjs8" for this suite.
Feb 14 11:20:21.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:20:21.437: INFO: namespace: e2e-tests-emptydir-nnjs8, resource: bindings, ignored listing per whitelist
Feb 14 11:20:21.583: INFO: namespace e2e-tests-emptydir-nnjs8 deletion completed in 6.206070167s

• [SLOW TEST:16.929 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
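The emptydir specs all follow the same shape: a short-lived pod writes a file with the requested mode into an emptyDir volume and the suite checks the result in the pod logs. A hand-rolled equivalent of the (root,0666,default) case, with illustrative names and image (the suite uses its own mount-test image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command:
    - /bin/sh
    - -c
    - touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium (node disk)
EOF

kubectl logs emptydir-0666-demo      # expect -rw-rw-rw- on /test-volume/f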
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:20:21.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 11:20:22.097: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fd4e54c5-4f1b-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-47c9j" to be "success or failure"
Feb 14 11:20:22.113: INFO: Pod "downwardapi-volume-fd4e54c5-4f1b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.14524ms
Feb 14 11:20:24.207: INFO: Pod "downwardapi-volume-fd4e54c5-4f1b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109431861s
Feb 14 11:20:26.219: INFO: Pod "downwardapi-volume-fd4e54c5-4f1b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121430376s
Feb 14 11:20:28.320: INFO: Pod "downwardapi-volume-fd4e54c5-4f1b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.222969783s
Feb 14 11:20:30.580: INFO: Pod "downwardapi-volume-fd4e54c5-4f1b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.482170919s
Feb 14 11:20:32.607: INFO: Pod "downwardapi-volume-fd4e54c5-4f1b-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.509842087s
STEP: Saw pod success
Feb 14 11:20:32.607: INFO: Pod "downwardapi-volume-fd4e54c5-4f1b-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:20:32.621: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fd4e54c5-4f1b-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 11:20:32.745: INFO: Waiting for pod downwardapi-volume-fd4e54c5-4f1b-11ea-af88-0242ac110007 to disappear
Feb 14 11:20:32.751: INFO: Pod downwardapi-volume-fd4e54c5-4f1b-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:20:32.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-47c9j" for this suite.
Feb 14 11:20:38.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:20:38.938: INFO: namespace: e2e-tests-projected-47c9j, resource: bindings, ignored listing per whitelist
Feb 14 11:20:38.988: INFO: namespace e2e-tests-projected-47c9j deletion completed in 6.233224616s

• [SLOW TEST:17.405 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
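A sketch of the projected downwardAPI volume this spec relies on: the container's memory request is exposed as a file through resourceFieldRef with a divisor. The names, image, and the 32Mi request are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi           # report the request in MiB
EOF

kubectl logs projected-downward-demo   # expect "32"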
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:20:38.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-c57h8/secret-test-078abd13-4f1c-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 11:20:39.369: INFO: Waiting up to 5m0s for pod "pod-configmaps-07999ebb-4f1c-11ea-af88-0242ac110007" in namespace "e2e-tests-secrets-c57h8" to be "success or failure"
Feb 14 11:20:39.423: INFO: Pod "pod-configmaps-07999ebb-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 54.601405ms
Feb 14 11:20:41.438: INFO: Pod "pod-configmaps-07999ebb-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069521718s
Feb 14 11:20:43.458: INFO: Pod "pod-configmaps-07999ebb-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088727945s
Feb 14 11:20:45.631: INFO: Pod "pod-configmaps-07999ebb-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.26212483s
Feb 14 11:20:47.649: INFO: Pod "pod-configmaps-07999ebb-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.279964204s
Feb 14 11:20:49.667: INFO: Pod "pod-configmaps-07999ebb-4f1c-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.298382049s
STEP: Saw pod success
Feb 14 11:20:49.667: INFO: Pod "pod-configmaps-07999ebb-4f1c-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:20:49.672: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-07999ebb-4f1c-11ea-af88-0242ac110007 container env-test: 
STEP: delete the pod
Feb 14 11:20:50.536: INFO: Waiting for pod pod-configmaps-07999ebb-4f1c-11ea-af88-0242ac110007 to disappear
Feb 14 11:20:50.552: INFO: Pod pod-configmaps-07999ebb-4f1c-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:20:50.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-c57h8" for this suite.
Feb 14 11:20:56.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:20:56.789: INFO: namespace: e2e-tests-secrets-c57h8, resource: bindings, ignored listing per whitelist
Feb 14 11:20:56.965: INFO: namespace e2e-tests-secrets-c57h8 deletion completed in 6.309602443s

• [SLOW TEST:17.977 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
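A minimal equivalent of the pattern exercised here: a Secret key injected into the container environment via env.valueFrom.secretKeyRef. The secret name, key, and image are illustrative.

kubectl create secret generic test-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: test-secret
          key: data-1
EOF

kubectl logs secret-env-demo         # expect SECRET_DATA=value-1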
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:20:56.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 14 11:20:57.223: INFO: Waiting up to 5m0s for pod "pod-123ebd7a-4f1c-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-l9vmb" to be "success or failure"
Feb 14 11:20:57.342: INFO: Pod "pod-123ebd7a-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 119.292222ms
Feb 14 11:20:59.518: INFO: Pod "pod-123ebd7a-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294399714s
Feb 14 11:21:01.549: INFO: Pod "pod-123ebd7a-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326176839s
Feb 14 11:21:03.745: INFO: Pod "pod-123ebd7a-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.522356293s
Feb 14 11:21:05.761: INFO: Pod "pod-123ebd7a-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538088513s
Feb 14 11:21:07.827: INFO: Pod "pod-123ebd7a-4f1c-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.603737455s
STEP: Saw pod success
Feb 14 11:21:07.827: INFO: Pod "pod-123ebd7a-4f1c-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:21:07.851: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-123ebd7a-4f1c-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 11:21:08.288: INFO: Waiting for pod pod-123ebd7a-4f1c-11ea-af88-0242ac110007 to disappear
Feb 14 11:21:08.303: INFO: Pod pod-123ebd7a-4f1c-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:21:08.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-l9vmb" for this suite.
Feb 14 11:21:16.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:21:16.576: INFO: namespace: e2e-tests-emptydir-l9vmb, resource: bindings, ignored listing per whitelist
Feb 14 11:21:16.664: INFO: namespace e2e-tests-emptydir-l9vmb deletion completed in 8.352377256s

• [SLOW TEST:19.697 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:21:16.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 14 11:21:16.792: INFO: Waiting up to 5m0s for pod "downward-api-1dea38c8-4f1c-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-vqhzn" to be "success or failure"
Feb 14 11:21:16.802: INFO: Pod "downward-api-1dea38c8-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.983039ms
Feb 14 11:21:18.813: INFO: Pod "downward-api-1dea38c8-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020916351s
Feb 14 11:21:20.839: INFO: Pod "downward-api-1dea38c8-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046924135s
Feb 14 11:21:22.948: INFO: Pod "downward-api-1dea38c8-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.155350852s
Feb 14 11:21:25.334: INFO: Pod "downward-api-1dea38c8-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.541504512s
Feb 14 11:21:27.362: INFO: Pod "downward-api-1dea38c8-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.569533564s
Feb 14 11:21:29.376: INFO: Pod "downward-api-1dea38c8-4f1c-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.583215446s
STEP: Saw pod success
Feb 14 11:21:29.376: INFO: Pod "downward-api-1dea38c8-4f1c-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:21:29.380: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-1dea38c8-4f1c-11ea-af88-0242ac110007 container dapi-container: 
STEP: delete the pod
Feb 14 11:21:30.357: INFO: Waiting for pod downward-api-1dea38c8-4f1c-11ea-af88-0242ac110007 to disappear
Feb 14 11:21:30.736: INFO: Pod downward-api-1dea38c8-4f1c-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:21:30.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-vqhzn" for this suite.
Feb 14 11:21:36.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:21:36.935: INFO: namespace: e2e-tests-downward-api-vqhzn, resource: bindings, ignored listing per whitelist
Feb 14 11:21:37.004: INFO: namespace e2e-tests-downward-api-vqhzn deletion completed in 6.241696005s

• [SLOW TEST:20.339 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
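A sketch of the defaulting behaviour this spec verifies: when a container sets no resources.limits, downward API env vars that reference limits.cpu and limits.memory fall back to the node's allocatable values. Names and image are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-defaults-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    # no resources.limits here, so the values below default to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
EOF

kubectl logs downward-defaults-demo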
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:21:37.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-pvrd7
Feb 14 11:21:49.261: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-pvrd7
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 11:21:49.267: INFO: Initial restart count of pod liveness-http is 0
Feb 14 11:22:07.965: INFO: Restart count of pod e2e-tests-container-probe-pvrd7/liveness-http is now 1 (18.698200766s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:22:07.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-pvrd7" for this suite.
Feb 14 11:22:14.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:22:14.096: INFO: namespace: e2e-tests-container-probe-pvrd7, resource: bindings, ignored listing per whitelist
Feb 14 11:22:14.204: INFO: namespace e2e-tests-container-probe-pvrd7 deletion completed in 6.180751273s

• [SLOW TEST:37.200 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
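A rough stand-in for the liveness pod used here (the suite uses its own liveness test image; the busybox httpd approach below is only an illustrative approximation): the container serves /healthz for a few seconds, then removes it, so the httpGet probe starts failing and the kubelet bumps restartCount, which is the transition the spec waits for above.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo           # illustrative name
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    command:
    - /bin/sh
    - -c
    - >
      mkdir -p /www; echo ok > /www/healthz;
      (sleep 15; rm -f /www/healthz) &
      exec httpd -f -p 8080 -h /www
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1
EOF

# restartCount should go from 0 to 1 once /healthz starts returning 404:
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'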
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:22:14.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-rdsc4 in namespace e2e-tests-proxy-q9l4g
I0214 11:22:14.426095       8 runners.go:184] Created replication controller with name: proxy-service-rdsc4, namespace: e2e-tests-proxy-q9l4g, replica count: 1
I0214 11:22:15.477598       8 runners.go:184] proxy-service-rdsc4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 11:22:16.478335       8 runners.go:184] proxy-service-rdsc4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 11:22:17.479495       8 runners.go:184] proxy-service-rdsc4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 11:22:18.480305       8 runners.go:184] proxy-service-rdsc4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 11:22:19.480770       8 runners.go:184] proxy-service-rdsc4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 11:22:20.481332       8 runners.go:184] proxy-service-rdsc4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 11:22:21.481841       8 runners.go:184] proxy-service-rdsc4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 11:22:22.482479       8 runners.go:184] proxy-service-rdsc4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 11:22:23.482961       8 runners.go:184] proxy-service-rdsc4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0214 11:22:24.483539       8 runners.go:184] proxy-service-rdsc4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0214 11:22:25.484066       8 runners.go:184] proxy-service-rdsc4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0214 11:22:26.484570       8 runners.go:184] proxy-service-rdsc4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 14 11:22:26.516: INFO: setup took 12.1442423s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
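The 320 attempts announced above go through the apiserver's proxy subresources for the pod and the service the spec just created; the same requests can be issued by hand with kubectl get --raw (append ':<port or port-name>' before /proxy/ to target a specific port):

# Proxy through the pod subresource
kubectl get --raw \
  "/api/v1/namespaces/e2e-tests-proxy-q9l4g/pods/proxy-service-rdsc4-jzntr/proxy/"

# Proxy through the service subresource
kubectl get --raw \
  "/api/v1/namespaces/e2e-tests-proxy-q9l4g/services/proxy-service-rdsc4/proxy/"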
Feb 14 11:22:26.569: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-q9l4g/pods/proxy-service-rdsc4-jzntr/proxy/: ...
[log truncated here: the remaining proxy attempts, the result block for the Proxy spec, and the header of the next spec ([sig-storage] EmptyDir volumes should support (non-root,0666,default), its BeforeEach, STEP: Creating a kubernetes client, and the ">>> kubeConfig: /root/.kube/config" line) are missing from the capture]
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 14 11:22:39.411: INFO: Waiting up to 5m0s for pod "pod-4f1d6057-4f1c-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-dzdbg" to be "success or failure"
Feb 14 11:22:39.447: INFO: Pod "pod-4f1d6057-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 35.766202ms
Feb 14 11:22:41.761: INFO: Pod "pod-4f1d6057-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.34917305s
Feb 14 11:22:43.770: INFO: Pod "pod-4f1d6057-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35835337s
Feb 14 11:22:45.791: INFO: Pod "pod-4f1d6057-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.379216982s
Feb 14 11:22:47.813: INFO: Pod "pod-4f1d6057-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.401174594s
Feb 14 11:22:49.851: INFO: Pod "pod-4f1d6057-4f1c-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.438984359s
STEP: Saw pod success
Feb 14 11:22:49.851: INFO: Pod "pod-4f1d6057-4f1c-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:22:49.868: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4f1d6057-4f1c-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 11:22:50.110: INFO: Waiting for pod pod-4f1d6057-4f1c-11ea-af88-0242ac110007 to disappear
Feb 14 11:22:50.117: INFO: Pod pod-4f1d6057-4f1c-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:22:50.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dzdbg" for this suite.
Feb 14 11:22:56.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:22:56.309: INFO: namespace: e2e-tests-emptydir-dzdbg, resource: bindings, ignored listing per whitelist
Feb 14 11:22:56.523: INFO: namespace e2e-tests-emptydir-dzdbg deletion completed in 6.394649387s

• [SLOW TEST:17.358 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
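The (non-root,...) emptydir variants differ from the root cases earlier only in the pod-level securityContext. A sketch with illustrative names and image; UID 1001 is an arbitrary non-root choice:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo        # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # run the container as a non-root UID
  containers:
  - name: test-container
    image: busybox:1.29
    command:
    - /bin/sh
    - -c
    - id -u && touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
EOF

kubectl logs emptydir-nonroot-demo   # expect uid 1001 and -rw-rw-rw- on the file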
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:22:56.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-77w5
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 11:22:56.953: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-77w5" in namespace "e2e-tests-subpath-mcc8m" to be "success or failure"
Feb 14 11:22:57.170: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Pending", Reason="", readiness=false. Elapsed: 216.988094ms
Feb 14 11:22:59.266: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313326012s
Feb 14 11:23:01.281: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328271524s
Feb 14 11:23:03.481: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.527890668s
Feb 14 11:23:05.491: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538281123s
Feb 14 11:23:07.502: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.54917175s
Feb 14 11:23:09.515: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.561530469s
Feb 14 11:23:11.741: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.788073211s
Feb 14 11:23:13.766: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Running", Reason="", readiness=false. Elapsed: 16.812608029s
Feb 14 11:23:15.781: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Running", Reason="", readiness=false. Elapsed: 18.828318636s
Feb 14 11:23:17.842: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Running", Reason="", readiness=false. Elapsed: 20.888581792s
Feb 14 11:23:19.871: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Running", Reason="", readiness=false. Elapsed: 22.917604723s
Feb 14 11:23:21.896: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Running", Reason="", readiness=false. Elapsed: 24.942725757s
Feb 14 11:23:23.915: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Running", Reason="", readiness=false. Elapsed: 26.961519816s
Feb 14 11:23:25.931: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Running", Reason="", readiness=false. Elapsed: 28.977842707s
Feb 14 11:23:27.952: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Running", Reason="", readiness=false. Elapsed: 30.999081759s
Feb 14 11:23:29.967: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Running", Reason="", readiness=false. Elapsed: 33.013550981s
Feb 14 11:23:32.214: INFO: Pod "pod-subpath-test-configmap-77w5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.260641076s
STEP: Saw pod success
Feb 14 11:23:32.214: INFO: Pod "pod-subpath-test-configmap-77w5" satisfied condition "success or failure"
Feb 14 11:23:32.249: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-77w5 container test-container-subpath-configmap-77w5: 
STEP: delete the pod
Feb 14 11:23:32.423: INFO: Waiting for pod pod-subpath-test-configmap-77w5 to disappear
Feb 14 11:23:32.523: INFO: Pod pod-subpath-test-configmap-77w5 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-77w5
Feb 14 11:23:32.523: INFO: Deleting pod "pod-subpath-test-configmap-77w5" in namespace "e2e-tests-subpath-mcc8m"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:23:32.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-mcc8m" for this suite.
Feb 14 11:23:38.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:23:38.837: INFO: namespace: e2e-tests-subpath-mcc8m, resource: bindings, ignored listing per whitelist
Feb 14 11:23:38.878: INFO: namespace e2e-tests-subpath-mcc8m deletion completed in 6.304044663s

• [SLOW TEST:42.354 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
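What "subpaths with configmap pod with mountPath of existing file" boils down to: a single ConfigMap key is bind-mounted, via subPath, over a file that already exists in the container image. A hand-rolled sketch; the names, key, and the /etc/passwd target are illustrative:

kubectl create configmap subpath-demo-cm --from-literal=data-1=configmap-value

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-existing-file-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/passwd"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/passwd         # a file that already exists in the image
      subPath: data-1                # mount only this key over it
  volumes:
  - name: cm-volume
    configMap:
      name: subpath-demo-cm
EOF

kubectl logs subpath-existing-file-demo   # expect "configmap-value"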
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:23:38.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 11:23:39.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Feb 14 11:23:39.205: INFO: stderr: ""
Feb 14 11:23:39.205: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Feb 14 11:23:39.216: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:23:39.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xvqqj" for this suite.
Feb 14 11:23:45.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:23:45.561: INFO: namespace: e2e-tests-kubectl-xvqqj, resource: bindings, ignored listing per whitelist
Feb 14 11:23:45.583: INFO: namespace e2e-tests-kubectl-xvqqj deletion completed in 6.35574643s

S [SKIPPING] [6.704 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Feb 14 11:23:39.216: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
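The describe spec was skipped because the server predates v1.13.12, but what it would have checked is straightforward: create a replication controller and make sure kubectl describe reports the expected fields for it and its pods. An illustrative equivalent (the RC name, label, and image are placeholders, not from the suite):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: describe-demo-rc             # illustrative name
spec:
  replicas: 1
  selector:
    app: describe-demo
  template:
    metadata:
      labels:
        app: describe-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
EOF

kubectl describe rc describe-demo-rc        # name, image, replicas, events
kubectl describe pods -l app=describe-demo  # pod status, conditions, events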
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:23:45.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 11:23:45.786: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76b936ca-4f1c-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-4nxzz" to be "success or failure"
Feb 14 11:23:45.802: INFO: Pod "downwardapi-volume-76b936ca-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.534182ms
Feb 14 11:23:47.893: INFO: Pod "downwardapi-volume-76b936ca-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106699123s
Feb 14 11:23:49.903: INFO: Pod "downwardapi-volume-76b936ca-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.116678517s
Feb 14 11:23:51.919: INFO: Pod "downwardapi-volume-76b936ca-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132130491s
Feb 14 11:23:53.938: INFO: Pod "downwardapi-volume-76b936ca-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151136035s
Feb 14 11:23:56.763: INFO: Pod "downwardapi-volume-76b936ca-4f1c-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.976848339s
STEP: Saw pod success
Feb 14 11:23:56.764: INFO: Pod "downwardapi-volume-76b936ca-4f1c-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:23:56.774: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-76b936ca-4f1c-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 11:23:57.081: INFO: Waiting for pod downwardapi-volume-76b936ca-4f1c-11ea-af88-0242ac110007 to disappear
Feb 14 11:23:57.185: INFO: Pod downwardapi-volume-76b936ca-4f1c-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:23:57.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4nxzz" for this suite.
Feb 14 11:24:03.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:24:03.388: INFO: namespace: e2e-tests-projected-4nxzz, resource: bindings, ignored listing per whitelist
Feb 14 11:24:03.397: INFO: namespace e2e-tests-projected-4nxzz deletion completed in 6.201214513s

• [SLOW TEST:17.813 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
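Note: the behaviour exercised above can be reproduced with a single pod that projects limits.memory through the downward API while declaring no memory limit; the kubelet then substitutes node allocatable memory. Pod, volume and image names below are illustrative, not taken from the run:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi
EOF
# with no memory limit set, the file reports node allocatable memory (in Mi)
kubectl logs downwardapi-memlimit-demo
------------------------------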
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:24:03.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 14 11:24:20.103: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 11:24:20.123: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 11:24:22.124: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 11:24:22.151: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 11:24:24.124: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 11:24:24.331: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 11:24:26.124: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 11:24:26.138: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 11:24:28.124: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 11:24:28.143: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 11:24:30.124: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 11:24:30.150: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 11:24:32.124: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 11:24:32.147: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 11:24:34.124: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 11:24:34.151: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:24:34.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-dvhqv" for this suite.
Feb 14 11:24:48.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:24:48.468: INFO: namespace: e2e-tests-container-lifecycle-hook-dvhqv, resource: bindings, ignored listing per whitelist
Feb 14 11:24:48.499: INFO: namespace e2e-tests-container-lifecycle-hook-dvhqv deletion completed in 14.332369455s

• [SLOW TEST:45.102 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
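Note: the postStart flow above uses a separate handler pod that receives an HTTP GET from the kubelet when the hooked container starts. A minimal sketch of the same shape, assuming illustrative pod names and nginx as the handler:
kubectl run poststart-handler --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl wait --for=condition=Ready pod/poststart-handler --timeout=120s
HANDLER_IP=$(kubectl get pod poststart-handler -o jsonpath='{.status.podIP}')
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook-demo
spec:
  restartPolicy: Never
  containers:
  - name: pod-with-poststart-http-hook
    image: docker.io/library/busybox:1.29
    command: ["sleep", "600"]
    lifecycle:
      postStart:
        httpGet:
          host: ${HANDLER_IP}
          path: /
          port: 80
EOF
# the handler's access log should record the GET issued by the kubelet for the hook
kubectl logs poststart-handler
------------------------------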
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:24:48.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 14 11:24:48.768: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:25:04.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-l5nrt" for this suite.
Feb 14 11:25:10.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:25:10.740: INFO: namespace: e2e-tests-init-container-l5nrt, resource: bindings, ignored listing per whitelist
Feb 14 11:25:10.750: INFO: namespace e2e-tests-init-container-l5nrt deletion completed in 6.418309042s

• [SLOW TEST:22.251 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
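Note: the init-container ordering checked above can be observed with a restartPolicy: Never pod whose init containers run to completion before the app container starts. All names and images below are illustrative:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
EOF
# both init containers should report terminated/Completed before run1 executes
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'
------------------------------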
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:25:10.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Feb 14 11:25:11.650: INFO: Waiting up to 5m0s for pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-khdm5" in namespace "e2e-tests-svcaccounts-hkljc" to be "success or failure"
Feb 14 11:25:11.693: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-khdm5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.657459ms
Feb 14 11:25:13.706: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-khdm5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055903126s
Feb 14 11:25:15.743: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-khdm5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092784762s
Feb 14 11:25:17.766: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-khdm5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116003677s
Feb 14 11:25:20.171: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-khdm5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.521381496s
Feb 14 11:25:22.405: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-khdm5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.755199836s
Feb 14 11:25:24.426: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-khdm5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.776264424s
Feb 14 11:25:26.444: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-khdm5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.793538628s
STEP: Saw pod success
Feb 14 11:25:26.444: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-khdm5" satisfied condition "success or failure"
Feb 14 11:25:26.453: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-khdm5 container token-test: 
STEP: delete the pod
Feb 14 11:25:27.706: INFO: Waiting for pod pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-khdm5 to disappear
Feb 14 11:25:27.910: INFO: Pod pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-khdm5 no longer exists
STEP: Creating a pod to test consume service account root CA
Feb 14 11:25:28.198: INFO: Waiting up to 5m0s for pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-p6c7t" in namespace "e2e-tests-svcaccounts-hkljc" to be "success or failure"
Feb 14 11:25:28.208: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-p6c7t": Phase="Pending", Reason="", readiness=false. Elapsed: 9.508764ms
Feb 14 11:25:30.389: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-p6c7t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190350937s
Feb 14 11:25:32.406: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-p6c7t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208037356s
Feb 14 11:25:34.692: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-p6c7t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.493933396s
Feb 14 11:25:36.745: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-p6c7t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546618007s
Feb 14 11:25:39.055: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-p6c7t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.856211091s
Feb 14 11:25:41.648: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-p6c7t": Phase="Pending", Reason="", readiness=false. Elapsed: 13.449556622s
Feb 14 11:25:43.659: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-p6c7t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.460576159s
STEP: Saw pod success
Feb 14 11:25:43.659: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-p6c7t" satisfied condition "success or failure"
Feb 14 11:25:43.666: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-p6c7t container root-ca-test: 
STEP: delete the pod
Feb 14 11:25:43.893: INFO: Waiting for pod pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-p6c7t to disappear
Feb 14 11:25:43.905: INFO: Pod pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-p6c7t no longer exists
STEP: Creating a pod to test consume service account namespace
Feb 14 11:25:44.074: INFO: Waiting up to 5m0s for pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-sbh2b" in namespace "e2e-tests-svcaccounts-hkljc" to be "success or failure"
Feb 14 11:25:44.094: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-sbh2b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.509133ms
Feb 14 11:25:46.213: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-sbh2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138896313s
Feb 14 11:25:48.230: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-sbh2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155442015s
Feb 14 11:25:50.243: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-sbh2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169027416s
Feb 14 11:25:52.394: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-sbh2b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.320099788s
Feb 14 11:25:54.409: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-sbh2b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.334951998s
Feb 14 11:25:56.568: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-sbh2b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.493538424s
Feb 14 11:25:58.600: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-sbh2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.525595823s
STEP: Saw pod success
Feb 14 11:25:58.600: INFO: Pod "pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-sbh2b" satisfied condition "success or failure"
Feb 14 11:25:58.609: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-sbh2b container namespace-test: 
STEP: delete the pod
Feb 14 11:25:58.813: INFO: Waiting for pod pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-sbh2b to disappear
Feb 14 11:25:58.830: INFO: Pod pod-service-account-a9e54903-4f1c-11ea-af88-0242ac110007-sbh2b no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:25:58.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-hkljc" for this suite.
Feb 14 11:26:06.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:26:07.046: INFO: namespace: e2e-tests-svcaccounts-hkljc, resource: bindings, ignored listing per whitelist
Feb 14 11:26:07.108: INFO: namespace e2e-tests-svcaccounts-hkljc deletion completed in 8.262317893s

• [SLOW TEST:56.358 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
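Note: the three pods above read the token, root CA and namespace file that the service account admission controller mounts from the default service account. A single illustrative pod (names not from the run) shows the same mount:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: svcaccount-demo
spec:
  restartPolicy: Never
  containers:
  - name: token-test
    image: docker.io/library/busybox:1.29
    command:
    - sh
    - -c
    - ls /var/run/secrets/kubernetes.io/serviceaccount && cat /var/run/secrets/kubernetes.io/serviceaccount/namespace
EOF
# expected listing: ca.crt, namespace, token; the namespace file contains the pod's namespace
kubectl logs svcaccount-demo
------------------------------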
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:26:07.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 11:26:07.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 14 11:26:07.468: INFO: stderr: ""
Feb 14 11:26:07.469: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:26:07.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j9ljt" for this suite.
Feb 14 11:26:13.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:26:13.764: INFO: namespace: e2e-tests-kubectl-j9ljt, resource: bindings, ignored listing per whitelist
Feb 14 11:26:13.764: INFO: namespace e2e-tests-kubectl-j9ljt deletion completed in 6.277925371s

• [SLOW TEST:6.656 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
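Note: the check above only asserts that both client and server version structs are printed; the equivalent commands, runnable against any kubeconfig, are:
kubectl version            # human-readable Client Version / Server Version lines
kubectl version -o json    # the same version.Info fields as structured output
------------------------------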
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:26:13.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:26:22.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-jtqpj" for this suite.
Feb 14 11:27:16.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:27:16.252: INFO: namespace: e2e-tests-kubelet-test-jtqpj, resource: bindings, ignored listing per whitelist
Feb 14 11:27:16.341: INFO: namespace e2e-tests-kubelet-test-jtqpj deletion completed in 54.176579007s

• [SLOW TEST:62.576 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
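Note: hostAliases entries like the ones verified above are appended to the pod's /etc/hosts by the kubelet. An illustrative pod (names and addresses are placeholders):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox-host-aliases
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/hosts"]
EOF
# the output should contain a 127.0.0.1 entry listing foo.local and bar.local
kubectl logs hostaliases-demo
------------------------------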
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:27:16.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-f458a291-4f1c-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 11:27:16.620: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f463b53a-4f1c-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-xfggg" to be "success or failure"
Feb 14 11:27:16.642: INFO: Pod "pod-projected-configmaps-f463b53a-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 22.102066ms
Feb 14 11:27:18.662: INFO: Pod "pod-projected-configmaps-f463b53a-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041831893s
Feb 14 11:27:20.718: INFO: Pod "pod-projected-configmaps-f463b53a-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098458422s
Feb 14 11:27:22.731: INFO: Pod "pod-projected-configmaps-f463b53a-4f1c-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111281293s
Feb 14 11:27:24.754: INFO: Pod "pod-projected-configmaps-f463b53a-4f1c-11ea-af88-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 8.134040154s
Feb 14 11:27:26.773: INFO: Pod "pod-projected-configmaps-f463b53a-4f1c-11ea-af88-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 10.153083377s
Feb 14 11:27:28.800: INFO: Pod "pod-projected-configmaps-f463b53a-4f1c-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.17993658s
STEP: Saw pod success
Feb 14 11:27:28.800: INFO: Pod "pod-projected-configmaps-f463b53a-4f1c-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:27:28.833: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-f463b53a-4f1c-11ea-af88-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 11:27:29.073: INFO: Waiting for pod pod-projected-configmaps-f463b53a-4f1c-11ea-af88-0242ac110007 to disappear
Feb 14 11:27:29.081: INFO: Pod pod-projected-configmaps-f463b53a-4f1c-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:27:29.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xfggg" for this suite.
Feb 14 11:27:35.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:27:35.257: INFO: namespace: e2e-tests-projected-xfggg, resource: bindings, ignored listing per whitelist
Feb 14 11:27:35.281: INFO: namespace e2e-tests-projected-xfggg deletion completed in 6.191411693s

• [SLOW TEST:18.939 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
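Note: the projected configMap volume above remaps a key to a new path and sets an explicit per-item file mode. A minimal reproduction with illustrative names:
kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected-cm && cat /etc/projected-cm/my-key"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/projected-cm
  volumes:
  - name: cm-volume
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: my-key
            mode: 0400
EOF
# the key should be exposed as my-key, created with mode 0400, containing "value-1"
kubectl logs pod-projected-configmap-demo
------------------------------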
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:27:35.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-d5lp
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 11:27:35.491: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-d5lp" in namespace "e2e-tests-subpath-bpfnq" to be "success or failure"
Feb 14 11:27:35.505: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.057351ms
Feb 14 11:27:37.718: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227510215s
Feb 14 11:27:39.740: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.24880956s
Feb 14 11:27:42.052: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.560738587s
Feb 14 11:27:44.069: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.578509338s
Feb 14 11:27:46.084: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.593251228s
Feb 14 11:27:48.102: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.611305194s
Feb 14 11:27:50.119: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.628521837s
Feb 14 11:27:52.152: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Running", Reason="", readiness=false. Elapsed: 16.661193488s
Feb 14 11:27:54.163: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Running", Reason="", readiness=false. Elapsed: 18.672571472s
Feb 14 11:27:56.183: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Running", Reason="", readiness=false. Elapsed: 20.691843205s
Feb 14 11:27:58.196: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Running", Reason="", readiness=false. Elapsed: 22.70508732s
Feb 14 11:28:00.213: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Running", Reason="", readiness=false. Elapsed: 24.721573572s
Feb 14 11:28:02.227: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Running", Reason="", readiness=false. Elapsed: 26.736132241s
Feb 14 11:28:04.245: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Running", Reason="", readiness=false. Elapsed: 28.754435509s
Feb 14 11:28:06.267: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Running", Reason="", readiness=false. Elapsed: 30.775963277s
Feb 14 11:28:08.282: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Running", Reason="", readiness=false. Elapsed: 32.790765245s
Feb 14 11:28:10.296: INFO: Pod "pod-subpath-test-configmap-d5lp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.804870858s
STEP: Saw pod success
Feb 14 11:28:10.296: INFO: Pod "pod-subpath-test-configmap-d5lp" satisfied condition "success or failure"
Feb 14 11:28:10.301: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-d5lp container test-container-subpath-configmap-d5lp: 
STEP: delete the pod
Feb 14 11:28:10.542: INFO: Waiting for pod pod-subpath-test-configmap-d5lp to disappear
Feb 14 11:28:10.609: INFO: Pod pod-subpath-test-configmap-d5lp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-d5lp
Feb 14 11:28:10.610: INFO: Deleting pod "pod-subpath-test-configmap-d5lp" in namespace "e2e-tests-subpath-bpfnq"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:28:11.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-bpfnq" for this suite.
Feb 14 11:28:17.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:28:18.217: INFO: namespace: e2e-tests-subpath-bpfnq, resource: bindings, ignored listing per whitelist
Feb 14 11:28:18.217: INFO: namespace e2e-tests-subpath-bpfnq deletion completed in 6.588517015s

• [SLOW TEST:42.935 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
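Note: the subpath case above mounts a single configMap key as a file inside an existing directory via volumeMounts.subPath. An illustrative equivalent:
kubectl create configmap subpath-cm-demo --from-literal=index.html='hello from a configmap subpath'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["cat", "/usr/share/demo/index.html"]
    volumeMounts:
    - name: cm
      mountPath: /usr/share/demo/index.html
      subPath: index.html
  volumes:
  - name: cm
    configMap:
      name: subpath-cm-demo
EOF
kubectl logs pod-subpath-configmap-demo   # prints only the configmap value
------------------------------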
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:28:18.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 11:28:18.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-193da0fc-4f1d-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-kh5zn" to be "success or failure"
Feb 14 11:28:18.503: INFO: Pod "downwardapi-volume-193da0fc-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 47.133501ms
Feb 14 11:28:20.552: INFO: Pod "downwardapi-volume-193da0fc-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096287547s
Feb 14 11:28:22.605: INFO: Pod "downwardapi-volume-193da0fc-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149208713s
Feb 14 11:28:24.627: INFO: Pod "downwardapi-volume-193da0fc-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171031697s
Feb 14 11:28:26.697: INFO: Pod "downwardapi-volume-193da0fc-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.24148692s
Feb 14 11:28:28.714: INFO: Pod "downwardapi-volume-193da0fc-4f1d-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.258748765s
STEP: Saw pod success
Feb 14 11:28:28.714: INFO: Pod "downwardapi-volume-193da0fc-4f1d-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:28:28.719: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-193da0fc-4f1d-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 11:28:28.827: INFO: Waiting for pod downwardapi-volume-193da0fc-4f1d-11ea-af88-0242ac110007 to disappear
Feb 14 11:28:28.856: INFO: Pod downwardapi-volume-193da0fc-4f1d-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:28:28.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-kh5zn" for this suite.
Feb 14 11:28:35.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:28:35.445: INFO: namespace: e2e-tests-downward-api-kh5zn, resource: bindings, ignored listing per whitelist
Feb 14 11:28:35.484: INFO: namespace e2e-tests-downward-api-kh5zn deletion completed in 6.613783585s

• [SLOW TEST:17.267 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
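Note: as with the memory case earlier, the cpu default comes from a resourceFieldRef with no limit declared; the kubelet falls back to node allocatable CPU. Illustrative manifest:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpulimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
EOF
# with no cpu limit set, the value is node allocatable CPU expressed in millicores
kubectl logs downwardapi-cpulimit-demo
------------------------------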
S
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:28:35.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 11:28:35.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-239eed81-4f1d-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-dvgld" to be "success or failure"
Feb 14 11:28:35.979: INFO: Pod "downwardapi-volume-239eed81-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 91.083149ms
Feb 14 11:28:38.090: INFO: Pod "downwardapi-volume-239eed81-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201881303s
Feb 14 11:28:40.111: INFO: Pod "downwardapi-volume-239eed81-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223520832s
Feb 14 11:28:42.279: INFO: Pod "downwardapi-volume-239eed81-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.390812s
Feb 14 11:28:44.324: INFO: Pod "downwardapi-volume-239eed81-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.4359781s
Feb 14 11:28:46.344: INFO: Pod "downwardapi-volume-239eed81-4f1d-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.456085795s
STEP: Saw pod success
Feb 14 11:28:46.344: INFO: Pod "downwardapi-volume-239eed81-4f1d-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:28:46.350: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-239eed81-4f1d-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 11:28:46.593: INFO: Waiting for pod downwardapi-volume-239eed81-4f1d-11ea-af88-0242ac110007 to disappear
Feb 14 11:28:46.612: INFO: Pod downwardapi-volume-239eed81-4f1d-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:28:46.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-dvgld" for this suite.
Feb 14 11:28:52.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:28:52.798: INFO: namespace: e2e-tests-downward-api-dvgld, resource: bindings, ignored listing per whitelist
Feb 14 11:28:52.876: INFO: namespace e2e-tests-downward-api-dvgld deletion completed in 6.2390999s

• [SLOW TEST:17.391 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
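Note: defaultMode applies to every file in the downward API volume unless an item overrides it. A sketch with placeholder names (the stat -L invocation assumes the busybox applet resolves the volume's symlinked files):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -Lc '%a %n' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
# expected output: "400 /etc/podinfo/podname"
kubectl logs downwardapi-defaultmode-demo
------------------------------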
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:28:52.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 11:28:53.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-hlmlq'
Feb 14 11:28:55.115: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 14 11:28:55.115: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Feb 14 11:28:55.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-hlmlq'
Feb 14 11:28:55.426: INFO: stderr: ""
Feb 14 11:28:55.427: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:28:55.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hlmlq" for this suite.
Feb 14 11:29:03.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:29:03.595: INFO: namespace: e2e-tests-kubectl-hlmlq, resource: bindings, ignored listing per whitelist
Feb 14 11:29:03.696: INFO: namespace e2e-tests-kubectl-hlmlq deletion completed in 8.251169632s

• [SLOW TEST:10.820 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
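Note: the --generator=job/v1 path used above is already flagged as deprecated by the client. On later kubectl releases the same Job can be created directly; the job names here are generic, not from the run:
# what the spec ran (deprecated generator, still accepted by kubectl 1.13):
kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine
# preferred on later clients: create the Job object explicitly
kubectl create job nginx-job --image=docker.io/library/nginx:1.14-alpine
kubectl get jobs
kubectl delete job nginx-job
------------------------------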
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:29:03.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-pn4lx
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Feb 14 11:29:04.203: INFO: Found 0 stateful pods, waiting for 3
Feb 14 11:29:14.260: INFO: Found 2 stateful pods, waiting for 3
Feb 14 11:29:24.228: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:29:24.228: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:29:24.228: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 14 11:29:34.229: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:29:34.229: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:29:34.229: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:29:34.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn4lx ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 11:29:34.936: INFO: stderr: "I0214 11:29:34.514729    1521 log.go:172] (0xc000720370) (0xc00073e640) Create stream\nI0214 11:29:34.515136    1521 log.go:172] (0xc000720370) (0xc00073e640) Stream added, broadcasting: 1\nI0214 11:29:34.524840    1521 log.go:172] (0xc000720370) Reply frame received for 1\nI0214 11:29:34.524969    1521 log.go:172] (0xc000720370) (0xc0007c8e60) Create stream\nI0214 11:29:34.524983    1521 log.go:172] (0xc000720370) (0xc0007c8e60) Stream added, broadcasting: 3\nI0214 11:29:34.526382    1521 log.go:172] (0xc000720370) Reply frame received for 3\nI0214 11:29:34.526423    1521 log.go:172] (0xc000720370) (0xc00073e6e0) Create stream\nI0214 11:29:34.526439    1521 log.go:172] (0xc000720370) (0xc00073e6e0) Stream added, broadcasting: 5\nI0214 11:29:34.528086    1521 log.go:172] (0xc000720370) Reply frame received for 5\nI0214 11:29:34.734407    1521 log.go:172] (0xc000720370) Data frame received for 3\nI0214 11:29:34.734524    1521 log.go:172] (0xc0007c8e60) (3) Data frame handling\nI0214 11:29:34.734585    1521 log.go:172] (0xc0007c8e60) (3) Data frame sent\nI0214 11:29:34.926466    1521 log.go:172] (0xc000720370) (0xc0007c8e60) Stream removed, broadcasting: 3\nI0214 11:29:34.926820    1521 log.go:172] (0xc000720370) (0xc00073e6e0) Stream removed, broadcasting: 5\nI0214 11:29:34.926881    1521 log.go:172] (0xc000720370) Data frame received for 1\nI0214 11:29:34.926901    1521 log.go:172] (0xc00073e640) (1) Data frame handling\nI0214 11:29:34.926930    1521 log.go:172] (0xc00073e640) (1) Data frame sent\nI0214 11:29:34.926949    1521 log.go:172] (0xc000720370) (0xc00073e640) Stream removed, broadcasting: 1\nI0214 11:29:34.926976    1521 log.go:172] (0xc000720370) Go away received\nI0214 11:29:34.927293    1521 log.go:172] (0xc000720370) (0xc00073e640) Stream removed, broadcasting: 1\nI0214 11:29:34.927310    1521 log.go:172] (0xc000720370) (0xc0007c8e60) Stream removed, broadcasting: 3\nI0214 11:29:34.927317    1521 log.go:172] (0xc000720370) (0xc00073e6e0) Stream removed, broadcasting: 5\n"
Feb 14 11:29:34.936: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 11:29:34.937: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 14 11:29:45.040: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 14 11:29:55.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn4lx ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:29:55.824: INFO: stderr: "I0214 11:29:55.505664    1544 log.go:172] (0xc00072c2c0) (0xc000659360) Create stream\nI0214 11:29:55.505800    1544 log.go:172] (0xc00072c2c0) (0xc000659360) Stream added, broadcasting: 1\nI0214 11:29:55.511783    1544 log.go:172] (0xc00072c2c0) Reply frame received for 1\nI0214 11:29:55.511924    1544 log.go:172] (0xc00072c2c0) (0xc000532000) Create stream\nI0214 11:29:55.511948    1544 log.go:172] (0xc00072c2c0) (0xc000532000) Stream added, broadcasting: 3\nI0214 11:29:55.515016    1544 log.go:172] (0xc00072c2c0) Reply frame received for 3\nI0214 11:29:55.515088    1544 log.go:172] (0xc00072c2c0) (0xc00067e000) Create stream\nI0214 11:29:55.515114    1544 log.go:172] (0xc00072c2c0) (0xc00067e000) Stream added, broadcasting: 5\nI0214 11:29:55.516766    1544 log.go:172] (0xc00072c2c0) Reply frame received for 5\nI0214 11:29:55.672042    1544 log.go:172] (0xc00072c2c0) Data frame received for 3\nI0214 11:29:55.672131    1544 log.go:172] (0xc000532000) (3) Data frame handling\nI0214 11:29:55.672167    1544 log.go:172] (0xc000532000) (3) Data frame sent\nI0214 11:29:55.807135    1544 log.go:172] (0xc00072c2c0) Data frame received for 1\nI0214 11:29:55.807331    1544 log.go:172] (0xc00072c2c0) (0xc00067e000) Stream removed, broadcasting: 5\nI0214 11:29:55.807535    1544 log.go:172] (0xc00072c2c0) (0xc000532000) Stream removed, broadcasting: 3\nI0214 11:29:55.807656    1544 log.go:172] (0xc000659360) (1) Data frame handling\nI0214 11:29:55.807773    1544 log.go:172] (0xc000659360) (1) Data frame sent\nI0214 11:29:55.807836    1544 log.go:172] (0xc00072c2c0) (0xc000659360) Stream removed, broadcasting: 1\nI0214 11:29:55.807889    1544 log.go:172] (0xc00072c2c0) Go away received\nI0214 11:29:55.808547    1544 log.go:172] (0xc00072c2c0) (0xc000659360) Stream removed, broadcasting: 1\nI0214 11:29:55.808579    1544 log.go:172] (0xc00072c2c0) (0xc000532000) Stream removed, broadcasting: 3\nI0214 11:29:55.808596    1544 log.go:172] (0xc00072c2c0) (0xc00067e000) Stream removed, broadcasting: 5\n"
Feb 14 11:29:55.825: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 11:29:55.825: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 11:30:05.886: INFO: Waiting for StatefulSet e2e-tests-statefulset-pn4lx/ss2 to complete update
Feb 14 11:30:05.886: INFO: Waiting for Pod e2e-tests-statefulset-pn4lx/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 11:30:05.886: INFO: Waiting for Pod e2e-tests-statefulset-pn4lx/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 11:30:15.917: INFO: Waiting for StatefulSet e2e-tests-statefulset-pn4lx/ss2 to complete update
Feb 14 11:30:15.917: INFO: Waiting for Pod e2e-tests-statefulset-pn4lx/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 11:30:15.918: INFO: Waiting for Pod e2e-tests-statefulset-pn4lx/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 11:30:25.905: INFO: Waiting for StatefulSet e2e-tests-statefulset-pn4lx/ss2 to complete update
Feb 14 11:30:25.905: INFO: Waiting for Pod e2e-tests-statefulset-pn4lx/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 11:30:35.924: INFO: Waiting for StatefulSet e2e-tests-statefulset-pn4lx/ss2 to complete update
Feb 14 11:30:35.924: INFO: Waiting for Pod e2e-tests-statefulset-pn4lx/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 14 11:30:45.918: INFO: Waiting for StatefulSet e2e-tests-statefulset-pn4lx/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 14 11:30:55.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn4lx ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 11:30:56.775: INFO: stderr: "I0214 11:30:56.216705    1567 log.go:172] (0xc0001386e0) (0xc000720640) Create stream\nI0214 11:30:56.217219    1567 log.go:172] (0xc0001386e0) (0xc000720640) Stream added, broadcasting: 1\nI0214 11:30:56.228873    1567 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0214 11:30:56.228921    1567 log.go:172] (0xc0001386e0) (0xc0007206e0) Create stream\nI0214 11:30:56.228929    1567 log.go:172] (0xc0001386e0) (0xc0007206e0) Stream added, broadcasting: 3\nI0214 11:30:56.230896    1567 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0214 11:30:56.230953    1567 log.go:172] (0xc0001386e0) (0xc000310c80) Create stream\nI0214 11:30:56.230972    1567 log.go:172] (0xc0001386e0) (0xc000310c80) Stream added, broadcasting: 5\nI0214 11:30:56.238461    1567 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0214 11:30:56.473627    1567 log.go:172] (0xc0001386e0) Data frame received for 3\nI0214 11:30:56.473717    1567 log.go:172] (0xc0007206e0) (3) Data frame handling\nI0214 11:30:56.473754    1567 log.go:172] (0xc0007206e0) (3) Data frame sent\nI0214 11:30:56.758705    1567 log.go:172] (0xc0001386e0) Data frame received for 1\nI0214 11:30:56.758878    1567 log.go:172] (0xc0001386e0) (0xc0007206e0) Stream removed, broadcasting: 3\nI0214 11:30:56.759182    1567 log.go:172] (0xc000720640) (1) Data frame handling\nI0214 11:30:56.759298    1567 log.go:172] (0xc000720640) (1) Data frame sent\nI0214 11:30:56.759326    1567 log.go:172] (0xc0001386e0) (0xc000310c80) Stream removed, broadcasting: 5\nI0214 11:30:56.759420    1567 log.go:172] (0xc0001386e0) (0xc000720640) Stream removed, broadcasting: 1\nI0214 11:30:56.759454    1567 log.go:172] (0xc0001386e0) Go away received\nI0214 11:30:56.760565    1567 log.go:172] (0xc0001386e0) (0xc000720640) Stream removed, broadcasting: 1\nI0214 11:30:56.760578    1567 log.go:172] (0xc0001386e0) (0xc0007206e0) Stream removed, broadcasting: 3\nI0214 11:30:56.760582    1567 log.go:172] (0xc0001386e0) (0xc000310c80) Stream removed, broadcasting: 5\n"
Feb 14 11:30:56.776: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 11:30:56.776: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 11:31:06.910: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 14 11:31:16.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pn4lx ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:31:17.549: INFO: stderr: "I0214 11:31:17.248708    1589 log.go:172] (0xc0006d42c0) (0xc0005a72c0) Create stream\nI0214 11:31:17.249350    1589 log.go:172] (0xc0006d42c0) (0xc0005a72c0) Stream added, broadcasting: 1\nI0214 11:31:17.259167    1589 log.go:172] (0xc0006d42c0) Reply frame received for 1\nI0214 11:31:17.259258    1589 log.go:172] (0xc0006d42c0) (0xc000718000) Create stream\nI0214 11:31:17.259273    1589 log.go:172] (0xc0006d42c0) (0xc000718000) Stream added, broadcasting: 3\nI0214 11:31:17.260392    1589 log.go:172] (0xc0006d42c0) Reply frame received for 3\nI0214 11:31:17.260482    1589 log.go:172] (0xc0006d42c0) (0xc0005ca000) Create stream\nI0214 11:31:17.260506    1589 log.go:172] (0xc0006d42c0) (0xc0005ca000) Stream added, broadcasting: 5\nI0214 11:31:17.261713    1589 log.go:172] (0xc0006d42c0) Reply frame received for 5\nI0214 11:31:17.409901    1589 log.go:172] (0xc0006d42c0) Data frame received for 3\nI0214 11:31:17.410029    1589 log.go:172] (0xc000718000) (3) Data frame handling\nI0214 11:31:17.410061    1589 log.go:172] (0xc000718000) (3) Data frame sent\nI0214 11:31:17.535884    1589 log.go:172] (0xc0006d42c0) Data frame received for 1\nI0214 11:31:17.536051    1589 log.go:172] (0xc0006d42c0) (0xc000718000) Stream removed, broadcasting: 3\nI0214 11:31:17.536125    1589 log.go:172] (0xc0005a72c0) (1) Data frame handling\nI0214 11:31:17.536156    1589 log.go:172] (0xc0005a72c0) (1) Data frame sent\nI0214 11:31:17.536201    1589 log.go:172] (0xc0006d42c0) (0xc0005ca000) Stream removed, broadcasting: 5\nI0214 11:31:17.536229    1589 log.go:172] (0xc0006d42c0) (0xc0005a72c0) Stream removed, broadcasting: 1\nI0214 11:31:17.536281    1589 log.go:172] (0xc0006d42c0) Go away received\nI0214 11:31:17.536757    1589 log.go:172] (0xc0006d42c0) (0xc0005a72c0) Stream removed, broadcasting: 1\nI0214 11:31:17.536779    1589 log.go:172] (0xc0006d42c0) (0xc000718000) Stream removed, broadcasting: 3\nI0214 11:31:17.536789    1589 log.go:172] (0xc0006d42c0) (0xc0005ca000) Stream removed, broadcasting: 5\n"
Feb 14 11:31:17.549: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 11:31:17.549: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 11:31:27.609: INFO: Waiting for StatefulSet e2e-tests-statefulset-pn4lx/ss2 to complete update
Feb 14 11:31:27.609: INFO: Waiting for Pod e2e-tests-statefulset-pn4lx/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 11:31:27.609: INFO: Waiting for Pod e2e-tests-statefulset-pn4lx/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 11:31:27.609: INFO: Waiting for Pod e2e-tests-statefulset-pn4lx/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 11:31:37.640: INFO: Waiting for StatefulSet e2e-tests-statefulset-pn4lx/ss2 to complete update
Feb 14 11:31:37.641: INFO: Waiting for Pod e2e-tests-statefulset-pn4lx/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 11:31:37.641: INFO: Waiting for Pod e2e-tests-statefulset-pn4lx/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 11:31:57.633: INFO: Waiting for StatefulSet e2e-tests-statefulset-pn4lx/ss2 to complete update
Feb 14 11:31:57.634: INFO: Waiting for Pod e2e-tests-statefulset-pn4lx/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 14 11:32:07.626: INFO: Waiting for StatefulSet e2e-tests-statefulset-pn4lx/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 14 11:32:17.651: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pn4lx
Feb 14 11:32:17.656: INFO: Scaling statefulset ss2 to 0
Feb 14 11:32:57.708: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 11:32:57.715: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:32:57.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-pn4lx" for this suite.
Feb 14 11:33:05.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:33:05.907: INFO: namespace: e2e-tests-statefulset-pn4lx, resource: bindings, ignored listing per whitelist
Feb 14 11:33:05.958: INFO: namespace e2e-tests-statefulset-pn4lx deletion completed in 8.195892208s

• [SLOW TEST:242.261 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
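Note: the rolling update and rollback above are driven through the StatefulSet's controller revision history, and roughly the same flow can be exercised with kubectl alone. The statefulset name ss2 and the images match the log, but the container name "nginx" is an assumption about the pod template:
# roll the template forward to the new image
kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl rollout status statefulset/ss2
# inspect the controller revisions created for each template version
kubectl rollout history statefulset/ss2
# roll back to the previous revision (nginx:1.14-alpine)
kubectl rollout undo statefulset/ss2
kubectl rollout status statefulset/ss2
------------------------------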
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:33:05.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 14 11:33:06.181: INFO: Waiting up to 5m0s for pod "pod-c4be820e-4f1d-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-j4289" to be "success or failure"
Feb 14 11:33:06.216: INFO: Pod "pod-c4be820e-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 34.6522ms
Feb 14 11:33:08.327: INFO: Pod "pod-c4be820e-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144987462s
Feb 14 11:33:10.351: INFO: Pod "pod-c4be820e-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169355707s
Feb 14 11:33:12.698: INFO: Pod "pod-c4be820e-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.516138546s
Feb 14 11:33:14.713: INFO: Pod "pod-c4be820e-4f1d-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531812284s
Feb 14 11:33:16.729: INFO: Pod "pod-c4be820e-4f1d-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.547621472s
STEP: Saw pod success
Feb 14 11:33:16.729: INFO: Pod "pod-c4be820e-4f1d-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:33:16.735: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c4be820e-4f1d-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 11:33:16.847: INFO: Waiting for pod pod-c4be820e-4f1d-11ea-af88-0242ac110007 to disappear
Feb 14 11:33:16.900: INFO: Pod pod-c4be820e-4f1d-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:33:16.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-j4289" for this suite.
Feb 14 11:33:22.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:33:23.179: INFO: namespace: e2e-tests-emptydir-j4289, resource: bindings, ignored listing per whitelist
Feb 14 11:33:23.189: INFO: namespace e2e-tests-emptydir-j4289 deletion completed in 6.279425427s

• [SLOW TEST:17.230 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
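The pod used by this EmptyDir spec is built programmatically by the framework, but the kind of pod it creates can be sketched as a manifest. The image, mount path, and command below are assumptions for illustration (the framework uses its own test image); only the emptyDir stanza and the non-root security context reflect what the spec name "(non-root,0777,default)" promises:

  # emptydir-0777-demo.yaml (hypothetical file)
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0777-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001                  # non-root
    containers:
    - name: test-container
      image: busybox                   # assumption; not the image the e2e test uses
      command: ["sh", "-c", "stat -c '%a' /mnt/volume && touch /mnt/volume/probe"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt/volume
    volumes:
    - name: scratch
      emptyDir: {}                     # default medium, i.e. node-local disk

  kubectl apply -f emptydir-0777-demo.yaml -n e2e-tests-emptydir-j4289
  kubectl get pod emptydir-0777-demo -n e2e-tests-emptydir-j4289 -o jsonpath='{.status.phase}{"\n"}'

"Success or failure" in the log above means the framework waits for the pod to reach the Succeeded phase and then deletes it.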
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:33:23.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 14 11:33:23.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:23.786: INFO: stderr: ""
Feb 14 11:33:23.786: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 11:33:23.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:23.986: INFO: stderr: ""
Feb 14 11:33:23.987: INFO: stdout: "update-demo-nautilus-ftrwt update-demo-nautilus-xfd76 "
Feb 14 11:33:23.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:24.233: INFO: stderr: ""
Feb 14 11:33:24.233: INFO: stdout: ""
Feb 14 11:33:24.233: INFO: update-demo-nautilus-ftrwt is created but not running
Feb 14 11:33:29.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:29.336: INFO: stderr: ""
Feb 14 11:33:29.336: INFO: stdout: "update-demo-nautilus-ftrwt update-demo-nautilus-xfd76 "
Feb 14 11:33:29.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:29.450: INFO: stderr: ""
Feb 14 11:33:29.450: INFO: stdout: ""
Feb 14 11:33:29.450: INFO: update-demo-nautilus-ftrwt is created but not running
Feb 14 11:33:34.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:34.598: INFO: stderr: ""
Feb 14 11:33:34.598: INFO: stdout: "update-demo-nautilus-ftrwt update-demo-nautilus-xfd76 "
Feb 14 11:33:34.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:34.704: INFO: stderr: ""
Feb 14 11:33:34.704: INFO: stdout: "true"
Feb 14 11:33:34.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:34.823: INFO: stderr: ""
Feb 14 11:33:34.823: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 11:33:34.823: INFO: validating pod update-demo-nautilus-ftrwt
Feb 14 11:33:34.833: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 11:33:34.833: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 11:33:34.833: INFO: update-demo-nautilus-ftrwt is verified up and running
Feb 14 11:33:34.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xfd76 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:34.988: INFO: stderr: ""
Feb 14 11:33:34.989: INFO: stdout: ""
Feb 14 11:33:34.989: INFO: update-demo-nautilus-xfd76 is created but not running
Feb 14 11:33:39.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:40.163: INFO: stderr: ""
Feb 14 11:33:40.163: INFO: stdout: "update-demo-nautilus-ftrwt update-demo-nautilus-xfd76 "
Feb 14 11:33:40.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:40.317: INFO: stderr: ""
Feb 14 11:33:40.318: INFO: stdout: "true"
Feb 14 11:33:40.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:40.454: INFO: stderr: ""
Feb 14 11:33:40.454: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 11:33:40.454: INFO: validating pod update-demo-nautilus-ftrwt
Feb 14 11:33:40.495: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 11:33:40.496: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 11:33:40.496: INFO: update-demo-nautilus-ftrwt is verified up and running
Feb 14 11:33:40.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xfd76 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:40.666: INFO: stderr: ""
Feb 14 11:33:40.666: INFO: stdout: "true"
Feb 14 11:33:40.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xfd76 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:40.771: INFO: stderr: ""
Feb 14 11:33:40.771: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 11:33:40.771: INFO: validating pod update-demo-nautilus-xfd76
Feb 14 11:33:40.781: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 11:33:40.781: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 11:33:40.781: INFO: update-demo-nautilus-xfd76 is verified up and running
STEP: scaling down the replication controller
Feb 14 11:33:40.783: INFO: scanned /root for discovery docs: 
Feb 14 11:33:40.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:42.642: INFO: stderr: ""
Feb 14 11:33:42.642: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 11:33:42.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:43.181: INFO: stderr: ""
Feb 14 11:33:43.181: INFO: stdout: "update-demo-nautilus-ftrwt update-demo-nautilus-xfd76 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 14 11:33:48.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:48.390: INFO: stderr: ""
Feb 14 11:33:48.390: INFO: stdout: "update-demo-nautilus-ftrwt "
Feb 14 11:33:48.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:48.612: INFO: stderr: ""
Feb 14 11:33:48.612: INFO: stdout: "true"
Feb 14 11:33:48.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:48.711: INFO: stderr: ""
Feb 14 11:33:48.711: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 11:33:48.711: INFO: validating pod update-demo-nautilus-ftrwt
Feb 14 11:33:48.719: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 11:33:48.719: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 11:33:48.719: INFO: update-demo-nautilus-ftrwt is verified up and running
STEP: scaling up the replication controller
Feb 14 11:33:48.721: INFO: scanned /root for discovery docs: 
Feb 14 11:33:48.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:49.971: INFO: stderr: ""
Feb 14 11:33:49.971: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 11:33:49.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:50.085: INFO: stderr: ""
Feb 14 11:33:50.085: INFO: stdout: "update-demo-nautilus-ftrwt update-demo-nautilus-mr76f "
Feb 14 11:33:50.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:50.172: INFO: stderr: ""
Feb 14 11:33:50.173: INFO: stdout: "true"
Feb 14 11:33:50.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:50.299: INFO: stderr: ""
Feb 14 11:33:50.299: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 11:33:50.299: INFO: validating pod update-demo-nautilus-ftrwt
Feb 14 11:33:50.305: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 11:33:50.305: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 11:33:50.305: INFO: update-demo-nautilus-ftrwt is verified up and running
Feb 14 11:33:50.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mr76f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:50.409: INFO: stderr: ""
Feb 14 11:33:50.409: INFO: stdout: ""
Feb 14 11:33:50.409: INFO: update-demo-nautilus-mr76f is created but not running
Feb 14 11:33:55.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:55.841: INFO: stderr: ""
Feb 14 11:33:55.841: INFO: stdout: "update-demo-nautilus-ftrwt update-demo-nautilus-mr76f "
Feb 14 11:33:55.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:56.069: INFO: stderr: ""
Feb 14 11:33:56.069: INFO: stdout: "true"
Feb 14 11:33:56.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:56.179: INFO: stderr: ""
Feb 14 11:33:56.179: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 11:33:56.179: INFO: validating pod update-demo-nautilus-ftrwt
Feb 14 11:33:56.187: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 11:33:56.187: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 11:33:56.187: INFO: update-demo-nautilus-ftrwt is verified up and running
Feb 14 11:33:56.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mr76f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:33:56.278: INFO: stderr: ""
Feb 14 11:33:56.279: INFO: stdout: ""
Feb 14 11:33:56.279: INFO: update-demo-nautilus-mr76f is created but not running
Feb 14 11:34:01.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:34:01.491: INFO: stderr: ""
Feb 14 11:34:01.491: INFO: stdout: "update-demo-nautilus-ftrwt update-demo-nautilus-mr76f "
Feb 14 11:34:01.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:34:01.625: INFO: stderr: ""
Feb 14 11:34:01.625: INFO: stdout: "true"
Feb 14 11:34:01.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftrwt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:34:01.756: INFO: stderr: ""
Feb 14 11:34:01.756: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 11:34:01.756: INFO: validating pod update-demo-nautilus-ftrwt
Feb 14 11:34:01.763: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 11:34:01.763: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 11:34:01.763: INFO: update-demo-nautilus-ftrwt is verified up and running
Feb 14 11:34:01.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mr76f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:34:01.905: INFO: stderr: ""
Feb 14 11:34:01.906: INFO: stdout: "true"
Feb 14 11:34:01.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mr76f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:34:02.070: INFO: stderr: ""
Feb 14 11:34:02.070: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 11:34:02.070: INFO: validating pod update-demo-nautilus-mr76f
Feb 14 11:34:02.080: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 11:34:02.080: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 11:34:02.080: INFO: update-demo-nautilus-mr76f is verified up and running
STEP: using delete to clean up resources
Feb 14 11:34:02.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:34:02.259: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 11:34:02.260: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 14 11:34:02.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-7xrz7'
Feb 14 11:34:02.420: INFO: stderr: "No resources found.\n"
Feb 14 11:34:02.421: INFO: stdout: ""
Feb 14 11:34:02.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-7xrz7 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 11:34:02.587: INFO: stderr: ""
Feb 14 11:34:02.587: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:34:02.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7xrz7" for this suite.
Feb 14 11:34:26.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:34:26.711: INFO: namespace: e2e-tests-kubectl-7xrz7, resource: bindings, ignored listing per whitelist
Feb 14 11:34:26.801: INFO: namespace e2e-tests-kubectl-7xrz7 deletion completed in 24.197398109s

• [SLOW TEST:63.612 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
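The scaling exchange in the Update Demo spec boils down to two kubectl calls that can be run by hand; a short sketch against the namespace used in this run:

  # Scale the replication controller down to one replica and back up to two
  kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m -n e2e-tests-kubectl-7xrz7
  kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m -n e2e-tests-kubectl-7xrz7

  # List the pods the controller currently owns, exactly as the test does
  kubectl get pods -l name=update-demo -n e2e-tests-kubectl-7xrz7 \
    -o template --template='{{range .items}}{{.metadata.name}} {{end}}'

The per-pod checks then query .status.containerStatuses and .spec.containers[].image with the same go-template mechanism until every replica reports a running update-demo container.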
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:34:26.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 11:34:27.016: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 14 11:34:27.128: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 14 11:34:32.956: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 14 11:34:36.986: INFO: Creating deployment "test-rolling-update-deployment"
Feb 14 11:34:37.003: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 14 11:34:37.154: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 14 11:34:39.186: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 14 11:34:39.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 11:34:41.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 11:34:43.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 11:34:45.294: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 11:34:47.330: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276877, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 11:34:49.275: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 14 11:34:49.290: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-6txts,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6txts/deployments/test-rolling-update-deployment,UID:fae0b274-4f1d-11ea-a994-fa163e34d433,ResourceVersion:21637356,Generation:1,CreationTimestamp:2020-02-14 11:34:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-14 11:34:37 +0000 UTC 2020-02-14 11:34:37 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-14 11:34:47 +0000 UTC 2020-02-14 11:34:37 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 14 11:34:49.294: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-6txts,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6txts/replicasets/test-rolling-update-deployment-75db98fb4c,UID:fafd88d8-4f1d-11ea-a994-fa163e34d433,ResourceVersion:21637346,Generation:1,CreationTimestamp:2020-02-14 11:34:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment fae0b274-4f1d-11ea-a994-fa163e34d433 0xc001eda857 0xc001eda858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 14 11:34:49.294: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 14 11:34:49.294: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-6txts,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6txts/replicasets/test-rolling-update-controller,UID:f4efa806-4f1d-11ea-a994-fa163e34d433,ResourceVersion:21637355,Generation:2,CreationTimestamp:2020-02-14 11:34:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment fae0b274-4f1d-11ea-a994-fa163e34d433 0xc001eda73f 0xc001eda750}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 14 11:34:49.299: INFO: Pod "test-rolling-update-deployment-75db98fb4c-fcfhn" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-fcfhn,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-6txts,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6txts/pods/test-rolling-update-deployment-75db98fb4c-fcfhn,UID:fb05df81-4f1d-11ea-a994-fa163e34d433,ResourceVersion:21637345,Generation:0,CreationTimestamp:2020-02-14 11:34:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c fafd88d8-4f1d-11ea-a994-fa163e34d433 0xc001edbaf7 0xc001edbaf8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-4hnrt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4hnrt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-4hnrt true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001edbca0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001edbcc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:34:37 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:34:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:34:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:34:37 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-14 11:34:37 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-14 11:34:46 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://3d11357991471c9d119170c420ebe60f19d3a8e966f0293351c965cf316a99b9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:34:49.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6txts" for this suite.
Feb 14 11:34:57.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:34:57.676: INFO: namespace: e2e-tests-deployment-6txts, resource: bindings, ignored listing per whitelist
Feb 14 11:34:57.761: INFO: namespace e2e-tests-deployment-6txts deletion completed in 8.457478539s

• [SLOW TEST:30.960 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
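The DeploymentStatus structs dumped above are what the framework polls while the rolling update converges; the same convergence can be observed with stock kubectl. A short sketch, assuming the deployment from this run still exists:

  # Block until the rolling update finishes
  kubectl rollout status deployment/test-rolling-update-deployment -n e2e-tests-deployment-6txts

  # Confirm the adopted replica set was scaled to zero and the new one owns the single pod
  kubectl get rs -n e2e-tests-deployment-6txts -l name=sample-pod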
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:34:57.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 11:34:57.979: INFO: Creating deployment "test-recreate-deployment"
Feb 14 11:34:57.986: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 14 11:34:58.061: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Feb 14 11:35:00.391: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 14 11:35:00.402: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276898, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276898, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276899, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276898, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 11:35:02.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276898, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276898, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276899, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276898, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 11:35:04.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276898, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276898, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276899, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276898, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 11:35:06.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276898, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276898, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276899, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717276898, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 11:35:08.423: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 14 11:35:08.449: INFO: Updating deployment test-recreate-deployment
Feb 14 11:35:08.449: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 14 11:35:09.183: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-ds9pb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ds9pb/deployments/test-recreate-deployment,UID:0763a738-4f1e-11ea-a994-fa163e34d433,ResourceVersion:21637450,Generation:2,CreationTimestamp:2020-02-14 11:34:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-14 11:35:08 +0000 UTC 2020-02-14 11:35:08 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-14 11:35:09 +0000 UTC 2020-02-14 11:34:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb 14 11:35:09.200: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-ds9pb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ds9pb/replicasets/test-recreate-deployment-589c4bfd,UID:0dd5c5bb-4f1e-11ea-a994-fa163e34d433,ResourceVersion:21637449,Generation:1,CreationTimestamp:2020-02-14 11:35:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0763a738-4f1e-11ea-a994-fa163e34d433 0xc0010734ef 0xc001073500}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 14 11:35:09.201: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 14 11:35:09.201: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-ds9pb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ds9pb/replicasets/test-recreate-deployment-5bf7f65dc,UID:077024bc-4f1e-11ea-a994-fa163e34d433,ResourceVersion:21637439,Generation:2,CreationTimestamp:2020-02-14 11:34:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0763a738-4f1e-11ea-a994-fa163e34d433 0xc0010735c0 0xc0010735c1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 14 11:35:09.214: INFO: Pod "test-recreate-deployment-589c4bfd-rx5z9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-rx5z9,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-ds9pb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ds9pb/pods/test-recreate-deployment-589c4bfd-rx5z9,UID:0dd74239-4f1e-11ea-a994-fa163e34d433,ResourceVersion:21637452,Generation:0,CreationTimestamp:2020-02-14 11:35:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 0dd5c5bb-4f1e-11ea-a994-fa163e34d433 0xc001655faf 0xc001655fc0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-xcjb8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xcjb8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-xcjb8 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0017cc020} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0017cc870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:35:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:35:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:35:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 11:35:08 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-14 11:35:08 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:35:09.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-ds9pb" for this suite.
Feb 14 11:35:16.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:35:16.264: INFO: namespace: e2e-tests-deployment-ds9pb, resource: bindings, ignored listing per whitelist
Feb 14 11:35:16.391: INFO: namespace e2e-tests-deployment-ds9pb deletion completed in 7.167790199s

• [SLOW TEST:18.629 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
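
The Deployment rollout above swaps the pod template from the redis image to nginx:1.14-alpine under the Recreate strategy, which is why the old ReplicaSet is scaled to 0 first and the replacement pod is still Pending when it is dumped. A rough, hand-written sketch of that kind of Deployment (the name and labels are illustrative, not taken from the test source), assuming a cluster you can apply manifests to:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo                # illustrative name
spec:
  replicas: 1
  strategy:
    type: Recreate                   # old ReplicaSet is scaled to 0 before the new one comes up
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: main
        image: docker.io/library/nginx:1.14-alpine
EOF
# Editing spec.template (e.g. the image) and re-applying triggers the
# delete-then-create rollout that the test asserts on:
kubectl rollout status deployment/recreate-demo
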
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:35:16.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-1280a02e-4f1e-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 11:35:16.655: INFO: Waiting up to 5m0s for pod "pod-configmaps-12827022-4f1e-11ea-af88-0242ac110007" in namespace "e2e-tests-configmap-qhhm4" to be "success or failure"
Feb 14 11:35:16.697: INFO: Pod "pod-configmaps-12827022-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 41.268322ms
Feb 14 11:35:18.943: INFO: Pod "pod-configmaps-12827022-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.287430838s
Feb 14 11:35:20.967: INFO: Pod "pod-configmaps-12827022-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311032212s
Feb 14 11:35:23.403: INFO: Pod "pod-configmaps-12827022-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.74765456s
Feb 14 11:35:25.413: INFO: Pod "pod-configmaps-12827022-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.75768423s
Feb 14 11:35:27.457: INFO: Pod "pod-configmaps-12827022-4f1e-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.801743945s
STEP: Saw pod success
Feb 14 11:35:27.457: INFO: Pod "pod-configmaps-12827022-4f1e-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:35:27.472: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-12827022-4f1e-11ea-af88-0242ac110007 container configmap-volume-test: 
STEP: delete the pod
Feb 14 11:35:27.569: INFO: Waiting for pod pod-configmaps-12827022-4f1e-11ea-af88-0242ac110007 to disappear
Feb 14 11:35:27.648: INFO: Pod pod-configmaps-12827022-4f1e-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:35:27.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-qhhm4" for this suite.
Feb 14 11:35:33.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:35:33.812: INFO: namespace: e2e-tests-configmap-qhhm4, resource: bindings, ignored listing per whitelist
Feb 14 11:35:33.952: INFO: namespace e2e-tests-configmap-qhhm4 deletion completed in 6.291699915s

• [SLOW TEST:17.561 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
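
The ConfigMap case above mounts a single ConfigMap at two different paths inside one pod and reads the same key through both mounts. A minimal sketch in that spirit, using illustrative names and a plain busybox image rather than anything taken from the test source:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                  # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: demo-config
  - name: configmap-volume-2
    configMap:
      name: demo-config
EOF
# After the pod runs to completion, the same value should appear twice:
kubectl logs configmap-two-volumes
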
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:35:33.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 11:35:34.419: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1d19c329-4f1e-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-jxthx" to be "success or failure"
Feb 14 11:35:34.459: INFO: Pod "downwardapi-volume-1d19c329-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 39.757776ms
Feb 14 11:35:36.480: INFO: Pod "downwardapi-volume-1d19c329-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060494252s
Feb 14 11:35:38.509: INFO: Pod "downwardapi-volume-1d19c329-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089812768s
Feb 14 11:35:40.866: INFO: Pod "downwardapi-volume-1d19c329-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.447261056s
Feb 14 11:35:42.924: INFO: Pod "downwardapi-volume-1d19c329-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.50530951s
Feb 14 11:35:44.962: INFO: Pod "downwardapi-volume-1d19c329-4f1e-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.542398571s
STEP: Saw pod success
Feb 14 11:35:44.962: INFO: Pod "downwardapi-volume-1d19c329-4f1e-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:35:44.982: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-1d19c329-4f1e-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 11:35:45.432: INFO: Waiting for pod downwardapi-volume-1d19c329-4f1e-11ea-af88-0242ac110007 to disappear
Feb 14 11:35:45.441: INFO: Pod downwardapi-volume-1d19c329-4f1e-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:35:45.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-jxthx" for this suite.
Feb 14 11:35:51.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:35:51.604: INFO: namespace: e2e-tests-downward-api-jxthx, resource: bindings, ignored listing per whitelist
Feb 14 11:35:51.609: INFO: namespace e2e-tests-downward-api-jxthx deletion completed in 6.161143557s

• [SLOW TEST:17.656 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
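
The Downward API case above exposes only the pod's own name as a file and reads it back from the client container. A minimal sketch of a downwardAPI volume projecting metadata.name (names are illustrative, not from the test source):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # only the pod's own name is projected
EOF
kubectl logs downwardapi-podname-demo   # should print: downwardapi-podname-demo
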
S
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:35:51.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 14 11:36:16.307: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bwj6t PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 11:36:16.307: INFO: >>> kubeConfig: /root/.kube/config
I0214 11:36:16.411969       8 log.go:172] (0xc00241c2c0) (0xc0018cb720) Create stream
I0214 11:36:16.412163       8 log.go:172] (0xc00241c2c0) (0xc0018cb720) Stream added, broadcasting: 1
I0214 11:36:16.425613       8 log.go:172] (0xc00241c2c0) Reply frame received for 1
I0214 11:36:16.425721       8 log.go:172] (0xc00241c2c0) (0xc00181aa00) Create stream
I0214 11:36:16.425741       8 log.go:172] (0xc00241c2c0) (0xc00181aa00) Stream added, broadcasting: 3
I0214 11:36:16.427178       8 log.go:172] (0xc00241c2c0) Reply frame received for 3
I0214 11:36:16.427228       8 log.go:172] (0xc00241c2c0) (0xc0018cb7c0) Create stream
I0214 11:36:16.427248       8 log.go:172] (0xc00241c2c0) (0xc0018cb7c0) Stream added, broadcasting: 5
I0214 11:36:16.428744       8 log.go:172] (0xc00241c2c0) Reply frame received for 5
I0214 11:36:16.702371       8 log.go:172] (0xc00241c2c0) Data frame received for 3
I0214 11:36:16.702502       8 log.go:172] (0xc00181aa00) (3) Data frame handling
I0214 11:36:16.702587       8 log.go:172] (0xc00181aa00) (3) Data frame sent
I0214 11:36:16.940445       8 log.go:172] (0xc00241c2c0) (0xc00181aa00) Stream removed, broadcasting: 3
I0214 11:36:16.940588       8 log.go:172] (0xc00241c2c0) Data frame received for 1
I0214 11:36:16.940617       8 log.go:172] (0xc0018cb720) (1) Data frame handling
I0214 11:36:16.940652       8 log.go:172] (0xc0018cb720) (1) Data frame sent
I0214 11:36:16.940685       8 log.go:172] (0xc00241c2c0) (0xc0018cb720) Stream removed, broadcasting: 1
I0214 11:36:16.940741       8 log.go:172] (0xc00241c2c0) (0xc0018cb7c0) Stream removed, broadcasting: 5
I0214 11:36:16.940800       8 log.go:172] (0xc00241c2c0) Go away received
I0214 11:36:16.940984       8 log.go:172] (0xc00241c2c0) (0xc0018cb720) Stream removed, broadcasting: 1
I0214 11:36:16.940997       8 log.go:172] (0xc00241c2c0) (0xc00181aa00) Stream removed, broadcasting: 3
I0214 11:36:16.941011       8 log.go:172] (0xc00241c2c0) (0xc0018cb7c0) Stream removed, broadcasting: 5
Feb 14 11:36:16.941: INFO: Exec stderr: ""
Feb 14 11:36:16.941: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bwj6t PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 11:36:16.941: INFO: >>> kubeConfig: /root/.kube/config
I0214 11:36:17.039879       8 log.go:172] (0xc0024282c0) (0xc001649720) Create stream
I0214 11:36:17.039981       8 log.go:172] (0xc0024282c0) (0xc001649720) Stream added, broadcasting: 1
I0214 11:36:17.071629       8 log.go:172] (0xc0024282c0) Reply frame received for 1
I0214 11:36:17.072017       8 log.go:172] (0xc0024282c0) (0xc00181a000) Create stream
I0214 11:36:17.072065       8 log.go:172] (0xc0024282c0) (0xc00181a000) Stream added, broadcasting: 3
I0214 11:36:17.076441       8 log.go:172] (0xc0024282c0) Reply frame received for 3
I0214 11:36:17.076622       8 log.go:172] (0xc0024282c0) (0xc00181a0a0) Create stream
I0214 11:36:17.076645       8 log.go:172] (0xc0024282c0) (0xc00181a0a0) Stream added, broadcasting: 5
I0214 11:36:17.079935       8 log.go:172] (0xc0024282c0) Reply frame received for 5
I0214 11:36:17.265114       8 log.go:172] (0xc0024282c0) Data frame received for 3
I0214 11:36:17.265188       8 log.go:172] (0xc00181a000) (3) Data frame handling
I0214 11:36:17.265226       8 log.go:172] (0xc00181a000) (3) Data frame sent
I0214 11:36:17.425108       8 log.go:172] (0xc0024282c0) Data frame received for 1
I0214 11:36:17.425265       8 log.go:172] (0xc0024282c0) (0xc00181a000) Stream removed, broadcasting: 3
I0214 11:36:17.425382       8 log.go:172] (0xc001649720) (1) Data frame handling
I0214 11:36:17.425405       8 log.go:172] (0xc001649720) (1) Data frame sent
I0214 11:36:17.425450       8 log.go:172] (0xc0024282c0) (0xc00181a0a0) Stream removed, broadcasting: 5
I0214 11:36:17.425489       8 log.go:172] (0xc0024282c0) (0xc001649720) Stream removed, broadcasting: 1
I0214 11:36:17.425517       8 log.go:172] (0xc0024282c0) Go away received
I0214 11:36:17.425748       8 log.go:172] (0xc0024282c0) (0xc001649720) Stream removed, broadcasting: 1
I0214 11:36:17.425757       8 log.go:172] (0xc0024282c0) (0xc00181a000) Stream removed, broadcasting: 3
I0214 11:36:17.425818       8 log.go:172] (0xc0024282c0) (0xc00181a0a0) Stream removed, broadcasting: 5
Feb 14 11:36:17.425: INFO: Exec stderr: ""
Feb 14 11:36:17.425: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bwj6t PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 11:36:17.426: INFO: >>> kubeConfig: /root/.kube/config
I0214 11:36:17.508330       8 log.go:172] (0xc00023fce0) (0xc00181a320) Create stream
I0214 11:36:17.508531       8 log.go:172] (0xc00023fce0) (0xc00181a320) Stream added, broadcasting: 1
I0214 11:36:17.516063       8 log.go:172] (0xc00023fce0) Reply frame received for 1
I0214 11:36:17.516156       8 log.go:172] (0xc00023fce0) (0xc002546000) Create stream
I0214 11:36:17.516214       8 log.go:172] (0xc00023fce0) (0xc002546000) Stream added, broadcasting: 3
I0214 11:36:17.517605       8 log.go:172] (0xc00023fce0) Reply frame received for 3
I0214 11:36:17.517648       8 log.go:172] (0xc00023fce0) (0xc001f80000) Create stream
I0214 11:36:17.517663       8 log.go:172] (0xc00023fce0) (0xc001f80000) Stream added, broadcasting: 5
I0214 11:36:17.518790       8 log.go:172] (0xc00023fce0) Reply frame received for 5
I0214 11:36:17.634064       8 log.go:172] (0xc00023fce0) Data frame received for 3
I0214 11:36:17.634132       8 log.go:172] (0xc002546000) (3) Data frame handling
I0214 11:36:17.634161       8 log.go:172] (0xc002546000) (3) Data frame sent
I0214 11:36:17.744388       8 log.go:172] (0xc00023fce0) (0xc001f80000) Stream removed, broadcasting: 5
I0214 11:36:17.744574       8 log.go:172] (0xc00023fce0) Data frame received for 1
I0214 11:36:17.744586       8 log.go:172] (0xc00181a320) (1) Data frame handling
I0214 11:36:17.744613       8 log.go:172] (0xc00181a320) (1) Data frame sent
I0214 11:36:17.744651       8 log.go:172] (0xc00023fce0) (0xc00181a320) Stream removed, broadcasting: 1
I0214 11:36:17.744805       8 log.go:172] (0xc00023fce0) (0xc002546000) Stream removed, broadcasting: 3
I0214 11:36:17.744828       8 log.go:172] (0xc00023fce0) Go away received
I0214 11:36:17.745062       8 log.go:172] (0xc00023fce0) (0xc00181a320) Stream removed, broadcasting: 1
I0214 11:36:17.745086       8 log.go:172] (0xc00023fce0) (0xc002546000) Stream removed, broadcasting: 3
I0214 11:36:17.745107       8 log.go:172] (0xc00023fce0) (0xc001f80000) Stream removed, broadcasting: 5
Feb 14 11:36:17.745: INFO: Exec stderr: ""
Feb 14 11:36:17.745: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bwj6t PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 11:36:17.745: INFO: >>> kubeConfig: /root/.kube/config
I0214 11:36:17.815056       8 log.go:172] (0xc002428210) (0xc00181a640) Create stream
I0214 11:36:17.815183       8 log.go:172] (0xc002428210) (0xc00181a640) Stream added, broadcasting: 1
I0214 11:36:17.819609       8 log.go:172] (0xc002428210) Reply frame received for 1
I0214 11:36:17.819680       8 log.go:172] (0xc002428210) (0xc0014ac0a0) Create stream
I0214 11:36:17.819711       8 log.go:172] (0xc002428210) (0xc0014ac0a0) Stream added, broadcasting: 3
I0214 11:36:17.822469       8 log.go:172] (0xc002428210) Reply frame received for 3
I0214 11:36:17.822501       8 log.go:172] (0xc002428210) (0xc001f800a0) Create stream
I0214 11:36:17.822516       8 log.go:172] (0xc002428210) (0xc001f800a0) Stream added, broadcasting: 5
I0214 11:36:17.823335       8 log.go:172] (0xc002428210) Reply frame received for 5
I0214 11:36:17.921059       8 log.go:172] (0xc002428210) Data frame received for 3
I0214 11:36:17.921146       8 log.go:172] (0xc0014ac0a0) (3) Data frame handling
I0214 11:36:17.921159       8 log.go:172] (0xc0014ac0a0) (3) Data frame sent
I0214 11:36:18.052670       8 log.go:172] (0xc002428210) Data frame received for 1
I0214 11:36:18.052771       8 log.go:172] (0xc002428210) (0xc0014ac0a0) Stream removed, broadcasting: 3
I0214 11:36:18.052837       8 log.go:172] (0xc00181a640) (1) Data frame handling
I0214 11:36:18.052858       8 log.go:172] (0xc00181a640) (1) Data frame sent
I0214 11:36:18.052944       8 log.go:172] (0xc002428210) (0xc001f800a0) Stream removed, broadcasting: 5
I0214 11:36:18.052972       8 log.go:172] (0xc002428210) (0xc00181a640) Stream removed, broadcasting: 1
I0214 11:36:18.052984       8 log.go:172] (0xc002428210) Go away received
I0214 11:36:18.053455       8 log.go:172] (0xc002428210) (0xc00181a640) Stream removed, broadcasting: 1
I0214 11:36:18.053467       8 log.go:172] (0xc002428210) (0xc0014ac0a0) Stream removed, broadcasting: 3
I0214 11:36:18.053472       8 log.go:172] (0xc002428210) (0xc001f800a0) Stream removed, broadcasting: 5
Feb 14 11:36:18.053: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 14 11:36:18.053: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bwj6t PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 11:36:18.053: INFO: >>> kubeConfig: /root/.kube/config
I0214 11:36:18.110575       8 log.go:172] (0xc0006658c0) (0xc0014ac3c0) Create stream
I0214 11:36:18.110663       8 log.go:172] (0xc0006658c0) (0xc0014ac3c0) Stream added, broadcasting: 1
I0214 11:36:18.120029       8 log.go:172] (0xc0006658c0) Reply frame received for 1
I0214 11:36:18.120108       8 log.go:172] (0xc0006658c0) (0xc001f80140) Create stream
I0214 11:36:18.120120       8 log.go:172] (0xc0006658c0) (0xc001f80140) Stream added, broadcasting: 3
I0214 11:36:18.121000       8 log.go:172] (0xc0006658c0) Reply frame received for 3
I0214 11:36:18.121024       8 log.go:172] (0xc0006658c0) (0xc001f94000) Create stream
I0214 11:36:18.121030       8 log.go:172] (0xc0006658c0) (0xc001f94000) Stream added, broadcasting: 5
I0214 11:36:18.121796       8 log.go:172] (0xc0006658c0) Reply frame received for 5
I0214 11:36:18.275755       8 log.go:172] (0xc0006658c0) Data frame received for 3
I0214 11:36:18.275825       8 log.go:172] (0xc001f80140) (3) Data frame handling
I0214 11:36:18.275860       8 log.go:172] (0xc001f80140) (3) Data frame sent
I0214 11:36:18.391789       8 log.go:172] (0xc0006658c0) (0xc001f80140) Stream removed, broadcasting: 3
I0214 11:36:18.391876       8 log.go:172] (0xc0006658c0) Data frame received for 1
I0214 11:36:18.391901       8 log.go:172] (0xc0014ac3c0) (1) Data frame handling
I0214 11:36:18.391916       8 log.go:172] (0xc0014ac3c0) (1) Data frame sent
I0214 11:36:18.391941       8 log.go:172] (0xc0006658c0) (0xc001f94000) Stream removed, broadcasting: 5
I0214 11:36:18.391973       8 log.go:172] (0xc0006658c0) (0xc0014ac3c0) Stream removed, broadcasting: 1
I0214 11:36:18.392003       8 log.go:172] (0xc0006658c0) Go away received
I0214 11:36:18.392214       8 log.go:172] (0xc0006658c0) (0xc0014ac3c0) Stream removed, broadcasting: 1
I0214 11:36:18.392236       8 log.go:172] (0xc0006658c0) (0xc001f80140) Stream removed, broadcasting: 3
I0214 11:36:18.392249       8 log.go:172] (0xc0006658c0) (0xc001f94000) Stream removed, broadcasting: 5
Feb 14 11:36:18.392: INFO: Exec stderr: ""
Feb 14 11:36:18.392: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bwj6t PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 11:36:18.392: INFO: >>> kubeConfig: /root/.kube/config
I0214 11:36:18.462034       8 log.go:172] (0xc0024fa4d0) (0xc001f94280) Create stream
I0214 11:36:18.462080       8 log.go:172] (0xc0024fa4d0) (0xc001f94280) Stream added, broadcasting: 1
I0214 11:36:18.465981       8 log.go:172] (0xc0024fa4d0) Reply frame received for 1
I0214 11:36:18.466029       8 log.go:172] (0xc0024fa4d0) (0xc0025460a0) Create stream
I0214 11:36:18.466054       8 log.go:172] (0xc0024fa4d0) (0xc0025460a0) Stream added, broadcasting: 3
I0214 11:36:18.467359       8 log.go:172] (0xc0024fa4d0) Reply frame received for 3
I0214 11:36:18.467398       8 log.go:172] (0xc0024fa4d0) (0xc001f80320) Create stream
I0214 11:36:18.467409       8 log.go:172] (0xc0024fa4d0) (0xc001f80320) Stream added, broadcasting: 5
I0214 11:36:18.469280       8 log.go:172] (0xc0024fa4d0) Reply frame received for 5
I0214 11:36:18.723333       8 log.go:172] (0xc0024fa4d0) Data frame received for 3
I0214 11:36:18.723443       8 log.go:172] (0xc0025460a0) (3) Data frame handling
I0214 11:36:18.723465       8 log.go:172] (0xc0025460a0) (3) Data frame sent
I0214 11:36:18.832081       8 log.go:172] (0xc0024fa4d0) (0xc0025460a0) Stream removed, broadcasting: 3
I0214 11:36:18.832228       8 log.go:172] (0xc0024fa4d0) Data frame received for 1
I0214 11:36:18.832248       8 log.go:172] (0xc001f94280) (1) Data frame handling
I0214 11:36:18.832274       8 log.go:172] (0xc001f94280) (1) Data frame sent
I0214 11:36:18.832289       8 log.go:172] (0xc0024fa4d0) (0xc001f94280) Stream removed, broadcasting: 1
I0214 11:36:18.833921       8 log.go:172] (0xc0024fa4d0) (0xc001f80320) Stream removed, broadcasting: 5
I0214 11:36:18.833985       8 log.go:172] (0xc0024fa4d0) Go away received
I0214 11:36:18.834042       8 log.go:172] (0xc0024fa4d0) (0xc001f94280) Stream removed, broadcasting: 1
I0214 11:36:18.834055       8 log.go:172] (0xc0024fa4d0) (0xc0025460a0) Stream removed, broadcasting: 3
I0214 11:36:18.834073       8 log.go:172] (0xc0024fa4d0) (0xc001f80320) Stream removed, broadcasting: 5
Feb 14 11:36:18.834: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 14 11:36:18.834: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bwj6t PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 11:36:18.834: INFO: >>> kubeConfig: /root/.kube/config
I0214 11:36:18.900485       8 log.go:172] (0xc0024fa9a0) (0xc001f94460) Create stream
I0214 11:36:18.900558       8 log.go:172] (0xc0024fa9a0) (0xc001f94460) Stream added, broadcasting: 1
I0214 11:36:18.905083       8 log.go:172] (0xc0024fa9a0) Reply frame received for 1
I0214 11:36:18.905119       8 log.go:172] (0xc0024fa9a0) (0xc0014ac460) Create stream
I0214 11:36:18.905129       8 log.go:172] (0xc0024fa9a0) (0xc0014ac460) Stream added, broadcasting: 3
I0214 11:36:18.905988       8 log.go:172] (0xc0024fa9a0) Reply frame received for 3
I0214 11:36:18.906008       8 log.go:172] (0xc0024fa9a0) (0xc0025461e0) Create stream
I0214 11:36:18.906018       8 log.go:172] (0xc0024fa9a0) (0xc0025461e0) Stream added, broadcasting: 5
I0214 11:36:18.906878       8 log.go:172] (0xc0024fa9a0) Reply frame received for 5
I0214 11:36:19.045791       8 log.go:172] (0xc0024fa9a0) Data frame received for 3
I0214 11:36:19.045866       8 log.go:172] (0xc0014ac460) (3) Data frame handling
I0214 11:36:19.045903       8 log.go:172] (0xc0014ac460) (3) Data frame sent
I0214 11:36:19.143723       8 log.go:172] (0xc0024fa9a0) Data frame received for 1
I0214 11:36:19.143800       8 log.go:172] (0xc0024fa9a0) (0xc0014ac460) Stream removed, broadcasting: 3
I0214 11:36:19.143859       8 log.go:172] (0xc001f94460) (1) Data frame handling
I0214 11:36:19.143876       8 log.go:172] (0xc001f94460) (1) Data frame sent
I0214 11:36:19.143896       8 log.go:172] (0xc0024fa9a0) (0xc0025461e0) Stream removed, broadcasting: 5
I0214 11:36:19.143925       8 log.go:172] (0xc0024fa9a0) (0xc001f94460) Stream removed, broadcasting: 1
I0214 11:36:19.143943       8 log.go:172] (0xc0024fa9a0) Go away received
I0214 11:36:19.144168       8 log.go:172] (0xc0024fa9a0) (0xc001f94460) Stream removed, broadcasting: 1
I0214 11:36:19.144182       8 log.go:172] (0xc0024fa9a0) (0xc0014ac460) Stream removed, broadcasting: 3
I0214 11:36:19.144189       8 log.go:172] (0xc0024fa9a0) (0xc0025461e0) Stream removed, broadcasting: 5
Feb 14 11:36:19.144: INFO: Exec stderr: ""
Feb 14 11:36:19.144: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bwj6t PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 11:36:19.144: INFO: >>> kubeConfig: /root/.kube/config
I0214 11:36:19.217338       8 log.go:172] (0xc0024fae70) (0xc001f946e0) Create stream
I0214 11:36:19.217375       8 log.go:172] (0xc0024fae70) (0xc001f946e0) Stream added, broadcasting: 1
I0214 11:36:19.220816       8 log.go:172] (0xc0024fae70) Reply frame received for 1
I0214 11:36:19.220860       8 log.go:172] (0xc0024fae70) (0xc0014ac500) Create stream
I0214 11:36:19.220874       8 log.go:172] (0xc0024fae70) (0xc0014ac500) Stream added, broadcasting: 3
I0214 11:36:19.221716       8 log.go:172] (0xc0024fae70) Reply frame received for 3
I0214 11:36:19.221735       8 log.go:172] (0xc0024fae70) (0xc00181a6e0) Create stream
I0214 11:36:19.221742       8 log.go:172] (0xc0024fae70) (0xc00181a6e0) Stream added, broadcasting: 5
I0214 11:36:19.222531       8 log.go:172] (0xc0024fae70) Reply frame received for 5
I0214 11:36:19.329372       8 log.go:172] (0xc0024fae70) Data frame received for 3
I0214 11:36:19.329499       8 log.go:172] (0xc0014ac500) (3) Data frame handling
I0214 11:36:19.329542       8 log.go:172] (0xc0014ac500) (3) Data frame sent
I0214 11:36:19.454053       8 log.go:172] (0xc0024fae70) Data frame received for 1
I0214 11:36:19.454134       8 log.go:172] (0xc001f946e0) (1) Data frame handling
I0214 11:36:19.454152       8 log.go:172] (0xc001f946e0) (1) Data frame sent
I0214 11:36:19.455278       8 log.go:172] (0xc0024fae70) (0xc001f946e0) Stream removed, broadcasting: 1
I0214 11:36:19.456957       8 log.go:172] (0xc0024fae70) (0xc0014ac500) Stream removed, broadcasting: 3
I0214 11:36:19.458685       8 log.go:172] (0xc0024fae70) (0xc00181a6e0) Stream removed, broadcasting: 5
I0214 11:36:19.458769       8 log.go:172] (0xc0024fae70) (0xc001f946e0) Stream removed, broadcasting: 1
I0214 11:36:19.458780       8 log.go:172] (0xc0024fae70) (0xc0014ac500) Stream removed, broadcasting: 3
I0214 11:36:19.458788       8 log.go:172] (0xc0024fae70) (0xc00181a6e0) Stream removed, broadcasting: 5
Feb 14 11:36:19.459: INFO: Exec stderr: ""
Feb 14 11:36:19.459: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bwj6t PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 11:36:19.459: INFO: >>> kubeConfig: /root/.kube/config
I0214 11:36:19.525523       8 log.go:172] (0xc000665d90) (0xc0014ac780) Create stream
I0214 11:36:19.525629       8 log.go:172] (0xc000665d90) (0xc0014ac780) Stream added, broadcasting: 1
I0214 11:36:19.534965       8 log.go:172] (0xc000665d90) Reply frame received for 1
I0214 11:36:19.535051       8 log.go:172] (0xc000665d90) (0xc002546280) Create stream
I0214 11:36:19.535061       8 log.go:172] (0xc000665d90) (0xc002546280) Stream added, broadcasting: 3
I0214 11:36:19.537044       8 log.go:172] (0xc000665d90) Reply frame received for 3
I0214 11:36:19.537131       8 log.go:172] (0xc000665d90) (0xc001f803c0) Create stream
I0214 11:36:19.537228       8 log.go:172] (0xc000665d90) (0xc001f803c0) Stream added, broadcasting: 5
I0214 11:36:19.538317       8 log.go:172] (0xc000665d90) Reply frame received for 5
I0214 11:36:19.620972       8 log.go:172] (0xc000665d90) Data frame received for 3
I0214 11:36:19.621053       8 log.go:172] (0xc002546280) (3) Data frame handling
I0214 11:36:19.621078       8 log.go:172] (0xc002546280) (3) Data frame sent
I0214 11:36:19.730957       8 log.go:172] (0xc000665d90) Data frame received for 1
I0214 11:36:19.731140       8 log.go:172] (0xc000665d90) (0xc002546280) Stream removed, broadcasting: 3
I0214 11:36:19.731244       8 log.go:172] (0xc0014ac780) (1) Data frame handling
I0214 11:36:19.731277       8 log.go:172] (0xc0014ac780) (1) Data frame sent
I0214 11:36:19.731336       8 log.go:172] (0xc000665d90) (0xc001f803c0) Stream removed, broadcasting: 5
I0214 11:36:19.731496       8 log.go:172] (0xc000665d90) (0xc0014ac780) Stream removed, broadcasting: 1
I0214 11:36:19.731727       8 log.go:172] (0xc000665d90) Go away received
I0214 11:36:19.732435       8 log.go:172] (0xc000665d90) (0xc0014ac780) Stream removed, broadcasting: 1
I0214 11:36:19.732491       8 log.go:172] (0xc000665d90) (0xc002546280) Stream removed, broadcasting: 3
I0214 11:36:19.732515       8 log.go:172] (0xc000665d90) (0xc001f803c0) Stream removed, broadcasting: 5
Feb 14 11:36:19.732: INFO: Exec stderr: ""
Feb 14 11:36:19.732: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-bwj6t PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 11:36:19.732: INFO: >>> kubeConfig: /root/.kube/config
I0214 11:36:19.848721       8 log.go:172] (0xc000dd02c0) (0xc001f80640) Create stream
I0214 11:36:19.849175       8 log.go:172] (0xc000dd02c0) (0xc001f80640) Stream added, broadcasting: 1
I0214 11:36:19.858509       8 log.go:172] (0xc000dd02c0) Reply frame received for 1
I0214 11:36:19.858610       8 log.go:172] (0xc000dd02c0) (0xc00181a780) Create stream
I0214 11:36:19.858629       8 log.go:172] (0xc000dd02c0) (0xc00181a780) Stream added, broadcasting: 3
I0214 11:36:19.860953       8 log.go:172] (0xc000dd02c0) Reply frame received for 3
I0214 11:36:19.861006       8 log.go:172] (0xc000dd02c0) (0xc002546320) Create stream
I0214 11:36:19.861019       8 log.go:172] (0xc000dd02c0) (0xc002546320) Stream added, broadcasting: 5
I0214 11:36:19.862154       8 log.go:172] (0xc000dd02c0) Reply frame received for 5
I0214 11:36:19.965848       8 log.go:172] (0xc000dd02c0) Data frame received for 3
I0214 11:36:19.965927       8 log.go:172] (0xc00181a780) (3) Data frame handling
I0214 11:36:19.965975       8 log.go:172] (0xc00181a780) (3) Data frame sent
I0214 11:36:20.079653       8 log.go:172] (0xc000dd02c0) Data frame received for 1
I0214 11:36:20.079758       8 log.go:172] (0xc001f80640) (1) Data frame handling
I0214 11:36:20.079783       8 log.go:172] (0xc001f80640) (1) Data frame sent
I0214 11:36:20.079804       8 log.go:172] (0xc000dd02c0) (0xc001f80640) Stream removed, broadcasting: 1
I0214 11:36:20.080365       8 log.go:172] (0xc000dd02c0) (0xc00181a780) Stream removed, broadcasting: 3
I0214 11:36:20.082704       8 log.go:172] (0xc000dd02c0) (0xc002546320) Stream removed, broadcasting: 5
I0214 11:36:20.082803       8 log.go:172] (0xc000dd02c0) (0xc001f80640) Stream removed, broadcasting: 1
I0214 11:36:20.082813       8 log.go:172] (0xc000dd02c0) (0xc00181a780) Stream removed, broadcasting: 3
I0214 11:36:20.082826       8 log.go:172] (0xc000dd02c0) (0xc002546320) Stream removed, broadcasting: 5
I0214 11:36:20.083016       8 log.go:172] (0xc000dd02c0) Go away received
Feb 14 11:36:20.083: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:36:20.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-bwj6t" for this suite.
Feb 14 11:37:06.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:37:06.237: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-bwj6t, resource: bindings, ignored listing per whitelist
Feb 14 11:37:06.346: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-bwj6t deletion completed in 46.249656522s

• [SLOW TEST:74.737 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
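
The /etc/hosts case above execs cat /etc/hosts in containers of a normal pod (where the kubelet writes the file), in a container that mounts its own /etc/hosts (which the kubelet leaves alone), and in a hostNetwork pod (also left alone). A simplified sketch covering just the first and last cases, with illustrative pod names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo               # kubelet-managed /etc/hosts (default pod networking)
spec:
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
---
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-hostnetwork-demo
spec:
  hostNetwork: true                  # kubelet does not manage /etc/hosts for hostNetwork pods
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
EOF
# Once both pods are Running, compare the files; the first typically carries a
# "# Kubernetes-managed hosts file" banner, the second is the node's own file:
kubectl exec etc-hosts-demo -c busybox-1 -- cat /etc/hosts
kubectl exec etc-hosts-hostnetwork-demo -c busybox-1 -- cat /etc/hosts
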
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:37:06.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 14 11:37:06.655: INFO: Waiting up to 5m0s for pod "pod-540df760-4f1e-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-lchsm" to be "success or failure"
Feb 14 11:37:06.698: INFO: Pod "pod-540df760-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 43.03966ms
Feb 14 11:37:08.713: INFO: Pod "pod-540df760-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058362154s
Feb 14 11:37:10.730: INFO: Pod "pod-540df760-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074886435s
Feb 14 11:37:12.973: INFO: Pod "pod-540df760-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.318299295s
Feb 14 11:37:14.989: INFO: Pod "pod-540df760-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.333766719s
Feb 14 11:37:17.004: INFO: Pod "pod-540df760-4f1e-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.34940332s
STEP: Saw pod success
Feb 14 11:37:17.005: INFO: Pod "pod-540df760-4f1e-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:37:17.010: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-540df760-4f1e-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 11:37:17.066: INFO: Waiting for pod pod-540df760-4f1e-11ea-af88-0242ac110007 to disappear
Feb 14 11:37:17.211: INFO: Pod pod-540df760-4f1e-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:37:17.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-lchsm" for this suite.
Feb 14 11:37:23.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:37:23.368: INFO: namespace: e2e-tests-emptydir-lchsm, resource: bindings, ignored listing per whitelist
Feb 14 11:37:23.408: INFO: namespace e2e-tests-emptydir-lchsm deletion completed in 6.184127408s

• [SLOW TEST:17.062 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
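
The EmptyDir case above mounts a volume on the default medium and checks that a file created there as root with mode 0777 keeps those permissions. A loose approximation using busybox instead of the test's mount-test image (names are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c 'perms=%a uid=%u' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium, i.e. node-local disk
EOF
kubectl logs emptydir-mode-demo      # expected: perms=777 uid=0
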
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:37:23.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 14 11:40:28.193: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:28.221: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:30.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:30.236: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:32.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:32.232: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:34.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:34.317: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:36.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:36.234: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:38.222: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:38.274: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:40.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:40.235: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:42.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:42.241: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:44.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:44.237: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:46.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:46.233: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:48.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:48.246: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:50.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:50.238: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:52.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:52.235: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:54.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:54.234: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:56.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:56.240: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:40:58.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:40:58.234: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:41:00.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:41:00.266: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:41:02.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:41:02.240: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 11:41:04.221: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 11:41:04.238: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:41:04.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9qnpz" for this suite.
Feb 14 11:41:28.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:41:28.478: INFO: namespace: e2e-tests-container-lifecycle-hook-9qnpz, resource: bindings, ignored listing per whitelist
Feb 14 11:41:28.509: INFO: namespace e2e-tests-container-lifecycle-hook-9qnpz deletion completed in 24.262651951s

• [SLOW TEST:245.101 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
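
The lifecycle-hook case above first starts a separate handler pod, then creates a pod whose postStart exec hook calls out to that handler before the pod is deleted again. A pared-down sketch that keeps the postStart exec hook but replaces the handler call with a local file write (everything here is illustrative, not the test's own spec):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: poststart-exec-demo          # illustrative name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "date > /tmp/poststart-ran"]
EOF
# The hook runs right after the container starts, and the container is not
# reported Running until the hook completes. Verify it fired:
kubectl exec poststart-exec-demo -- cat /tmp/poststart-ran
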
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:41:28.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-f057d666-4f1e-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 11:41:28.844: INFO: Waiting up to 5m0s for pod "pod-secrets-f059884f-4f1e-11ea-af88-0242ac110007" in namespace "e2e-tests-secrets-bdqt2" to be "success or failure"
Feb 14 11:41:28.893: INFO: Pod "pod-secrets-f059884f-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 48.973112ms
Feb 14 11:41:30.904: INFO: Pod "pod-secrets-f059884f-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060233624s
Feb 14 11:41:32.916: INFO: Pod "pod-secrets-f059884f-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072345673s
Feb 14 11:41:34.950: INFO: Pod "pod-secrets-f059884f-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105756202s
Feb 14 11:41:36.998: INFO: Pod "pod-secrets-f059884f-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153746246s
Feb 14 11:41:39.011: INFO: Pod "pod-secrets-f059884f-4f1e-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.166785605s
STEP: Saw pod success
Feb 14 11:41:39.011: INFO: Pod "pod-secrets-f059884f-4f1e-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:41:39.015: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-f059884f-4f1e-11ea-af88-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Feb 14 11:41:40.460: INFO: Waiting for pod pod-secrets-f059884f-4f1e-11ea-af88-0242ac110007 to disappear
Feb 14 11:41:40.650: INFO: Pod pod-secrets-f059884f-4f1e-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:41:40.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-bdqt2" for this suite.
Feb 14 11:41:46.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:41:46.924: INFO: namespace: e2e-tests-secrets-bdqt2, resource: bindings, ignored listing per whitelist
Feb 14 11:41:47.077: INFO: namespace e2e-tests-secrets-bdqt2 deletion completed in 6.404421883s

• [SLOW TEST:18.567 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
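
The Secrets case above mounts one Secret through two volumes in the same pod, mirroring the earlier ConfigMap scenario. A minimal sketch with illustrative names:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret                  # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: demo-secret
  - name: secret-volume-2
    secret:
      secretName: demo-secret
EOF
kubectl logs secret-two-volumes      # the same value should appear twice
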
SS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:41:47.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 11:41:47.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb519219-4f1e-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-hdxss" to be "success or failure"
Feb 14 11:41:47.319: INFO: Pod "downwardapi-volume-fb519219-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 82.102096ms
Feb 14 11:41:49.341: INFO: Pod "downwardapi-volume-fb519219-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104328084s
Feb 14 11:41:51.431: INFO: Pod "downwardapi-volume-fb519219-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194314553s
Feb 14 11:41:53.823: INFO: Pod "downwardapi-volume-fb519219-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.585478184s
Feb 14 11:41:56.403: INFO: Pod "downwardapi-volume-fb519219-4f1e-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.165827988s
Feb 14 11:41:58.420: INFO: Pod "downwardapi-volume-fb519219-4f1e-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.183001828s
STEP: Saw pod success
Feb 14 11:41:58.420: INFO: Pod "downwardapi-volume-fb519219-4f1e-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:41:58.433: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-fb519219-4f1e-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 11:41:58.888: INFO: Waiting for pod downwardapi-volume-fb519219-4f1e-11ea-af88-0242ac110007 to disappear
Feb 14 11:41:58.893: INFO: Pod downwardapi-volume-fb519219-4f1e-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:41:58.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-hdxss" for this suite.
Feb 14 11:42:04.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:42:05.026: INFO: namespace: e2e-tests-downward-api-hdxss, resource: bindings, ignored listing per whitelist
Feb 14 11:42:05.108: INFO: namespace e2e-tests-downward-api-hdxss deletion completed in 6.202239249s

• [SLOW TEST:18.031 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
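
The Downward API case above sets an explicit mode on a projected item file and checks the permissions on disk. A minimal sketch that projects metadata.name with mode 0400; names are illustrative, and stat -L is used because the kubelet typically materializes such items as symlinks into the volume's data directory:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -Lc 'mode=%a' /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400                   # per-item file mode, the property this test checks
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downwardapi-mode-demo   # expected first line: mode=400
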
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:42:05.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Feb 14 11:42:05.415: INFO: Waiting up to 5m0s for pod "var-expansion-06252408-4f1f-11ea-af88-0242ac110007" in namespace "e2e-tests-var-expansion-wbkvm" to be "success or failure"
Feb 14 11:42:05.445: INFO: Pod "var-expansion-06252408-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 29.81445ms
Feb 14 11:42:07.458: INFO: Pod "var-expansion-06252408-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043413805s
Feb 14 11:42:09.478: INFO: Pod "var-expansion-06252408-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063236372s
Feb 14 11:42:11.497: INFO: Pod "var-expansion-06252408-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081917742s
Feb 14 11:42:13.907: INFO: Pod "var-expansion-06252408-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.49208067s
Feb 14 11:42:15.920: INFO: Pod "var-expansion-06252408-4f1f-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.504811644s
STEP: Saw pod success
Feb 14 11:42:15.920: INFO: Pod "var-expansion-06252408-4f1f-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:42:15.925: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-06252408-4f1f-11ea-af88-0242ac110007 container dapi-container: 
STEP: delete the pod
Feb 14 11:42:16.065: INFO: Waiting for pod var-expansion-06252408-4f1f-11ea-af88-0242ac110007 to disappear
Feb 14 11:42:16.460: INFO: Pod var-expansion-06252408-4f1f-11ea-af88-0242ac110007 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:42:16.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-wbkvm" for this suite.
Feb 14 11:42:22.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:42:22.732: INFO: namespace: e2e-tests-var-expansion-wbkvm, resource: bindings, ignored listing per whitelist
Feb 14 11:42:22.850: INFO: namespace e2e-tests-var-expansion-wbkvm deletion completed in 6.330641203s

• [SLOW TEST:17.742 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
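
The variable-expansion case above declares an environment variable and references it as $(VAR) in the container's args, relying on Kubernetes to substitute the value from the declared env before the container starts. A minimal sketch (names and value are illustrative; the quoted heredoc keeps $(TEST_VAR) literal so the shell applying the manifest does not touch it):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: TEST_VAR
      value: "test-value"
    command: ["sh", "-c"]
    args: ["echo expanded: $(TEST_VAR)"]   # substituted from the declared env, not by the shell
EOF
kubectl logs var-expansion-demo      # expected: expanded: test-value
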
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:42:22.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-10a6af10-4f1f-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 11:42:23.039: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-10a76710-4f1f-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-5l7xv" to be "success or failure"
Feb 14 11:42:23.175: INFO: Pod "pod-projected-secrets-10a76710-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 135.671127ms
Feb 14 11:42:25.767: INFO: Pod "pod-projected-secrets-10a76710-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.728064662s
Feb 14 11:42:27.779: INFO: Pod "pod-projected-secrets-10a76710-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.739352366s
Feb 14 11:42:30.147: INFO: Pod "pod-projected-secrets-10a76710-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.108199824s
Feb 14 11:42:32.220: INFO: Pod "pod-projected-secrets-10a76710-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.181192371s
Feb 14 11:42:34.231: INFO: Pod "pod-projected-secrets-10a76710-4f1f-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.192248161s
STEP: Saw pod success
Feb 14 11:42:34.232: INFO: Pod "pod-projected-secrets-10a76710-4f1f-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:42:34.254: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-10a76710-4f1f-11ea-af88-0242ac110007 container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 11:42:34.909: INFO: Waiting for pod pod-projected-secrets-10a76710-4f1f-11ea-af88-0242ac110007 to disappear
Feb 14 11:42:34.948: INFO: Pod pod-projected-secrets-10a76710-4f1f-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:42:34.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-5l7xv" for this suite.
Feb 14 11:42:41.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:42:41.392: INFO: namespace: e2e-tests-projected-5l7xv, resource: bindings, ignored listing per whitelist
Feb 14 11:42:41.406: INFO: namespace e2e-tests-projected-5l7xv deletion completed in 6.417819123s

• [SLOW TEST:18.555 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
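Editor's note: the projected-secret test above mounts a secret through a projected volume and remaps a key onto a custom path ("volume with mappings" refers to the items list). A rough equivalent, assuming a pre-created secret; all names here are illustrative:

  kubectl create secret generic mysecret --from-literal=username=admin

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: projected-secret-volume
      projected:
        sources:
        - secret:
            name: mysecret
            items:
            - key: username              # secret key ...
              path: my-group/my-username # ... exposed at this relative path
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["cat", "/projected-volume/my-group/my-username"]
      volumeMounts:
      - name: projected-secret-volume
        mountPath: /projected-volume
        readOnly: true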
SSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:42:41.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:42:51.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-qvpdx" for this suite.
Feb 14 11:43:45.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:43:45.882: INFO: namespace: e2e-tests-kubelet-test-qvpdx, resource: bindings, ignored listing per whitelist
Feb 14 11:43:45.891: INFO: namespace e2e-tests-kubelet-test-qvpdx deletion completed in 54.176683217s

• [SLOW TEST:64.485 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
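Editor's note: the Kubelet test above schedules a container whose root filesystem is mounted read-only and checks that writes to it fail. The relevant knob is securityContext.readOnlyRootFilesystem; a minimal sketch (pod name and the probe command are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox-readonly-fs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: busybox
      image: busybox
      # writing to / should fail with a read-only filesystem error
      command: ["sh", "-c", "echo hello > /file || echo 'write refused, as expected'"]
      securityContext:
        readOnlyRootFilesystem: true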
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:43:45.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-p274d
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p274d to expose endpoints map[]
Feb 14 11:43:46.232: INFO: Get endpoints failed (77.679154ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 14 11:43:47.256: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p274d exposes endpoints map[] (1.101579343s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-p274d
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p274d to expose endpoints map[pod1:[100]]
Feb 14 11:43:51.436: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.159318256s elapsed, will retry)
Feb 14 11:43:56.618: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p274d exposes endpoints map[pod1:[100]] (9.341391714s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-p274d
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p274d to expose endpoints map[pod2:[101] pod1:[100]]
Feb 14 11:44:01.100: INFO: Unexpected endpoints: found map[42de1f0a-4f1f-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (4.47174113s elapsed, will retry)
Feb 14 11:44:04.688: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p274d exposes endpoints map[pod1:[100] pod2:[101]] (8.060274122s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-p274d
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p274d to expose endpoints map[pod2:[101]]
Feb 14 11:44:05.748: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p274d exposes endpoints map[pod2:[101]] (1.052652823s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-p274d
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-p274d to expose endpoints map[]
Feb 14 11:44:07.467: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-p274d exposes endpoints map[] (1.691741117s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:44:07.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-p274d" for this suite.
Feb 14 11:44:15.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:44:15.888: INFO: namespace: e2e-tests-services-p274d, resource: bindings, ignored listing per whitelist
Feb 14 11:44:15.911: INFO: namespace e2e-tests-services-p274d deletion completed in 8.196788739s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:30.020 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
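Editor's note: the Services test above creates a two-port service and verifies that its Endpoints object tracks pods as they come and go (pod1 backing target port 100, pod2 backing 101, per the log). A hand-rolled service of the same shape, with placeholder names and ports:

  apiVersion: v1
  kind: Service
  metadata:
    name: multi-endpoint-demo
  spec:
    selector:
      app: multiport-demo            # both backing pods carry this label
    ports:
    - name: portname1
      port: 80
      targetPort: 100
    - name: portname2
      port: 81
      targetPort: 101

Each backing pod exposes one of the target ports; kubectl get endpoints multi-endpoint-demo -o yaml then shows the per-pod addresses and ports that the test validates at every step.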
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:44:15.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 14 11:44:16.327: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-t7fw2,SelfLink:/api/v1/namespaces/e2e-tests-watch-t7fw2/configmaps/e2e-watch-test-watch-closed,UID:542d8a65-4f1f-11ea-a994-fa163e34d433,ResourceVersion:21638480,Generation:0,CreationTimestamp:2020-02-14 11:44:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 14 11:44:16.328: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-t7fw2,SelfLink:/api/v1/namespaces/e2e-tests-watch-t7fw2/configmaps/e2e-watch-test-watch-closed,UID:542d8a65-4f1f-11ea-a994-fa163e34d433,ResourceVersion:21638481,Generation:0,CreationTimestamp:2020-02-14 11:44:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 14 11:44:16.369: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-t7fw2,SelfLink:/api/v1/namespaces/e2e-tests-watch-t7fw2/configmaps/e2e-watch-test-watch-closed,UID:542d8a65-4f1f-11ea-a994-fa163e34d433,ResourceVersion:21638482,Generation:0,CreationTimestamp:2020-02-14 11:44:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 14 11:44:16.370: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-t7fw2,SelfLink:/api/v1/namespaces/e2e-tests-watch-t7fw2/configmaps/e2e-watch-test-watch-closed,UID:542d8a65-4f1f-11ea-a994-fa163e34d433,ResourceVersion:21638483,Generation:0,CreationTimestamp:2020-02-14 11:44:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:44:16.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-t7fw2" for this suite.
Feb 14 11:44:22.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:44:22.725: INFO: namespace: e2e-tests-watch-t7fw2, resource: bindings, ignored listing per whitelist
Feb 14 11:44:22.925: INFO: namespace e2e-tests-watch-t7fw2 deletion completed in 6.543855938s

• [SLOW TEST:7.013 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
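Editor's note: the Watchers test closes a watch after two events and re-opens it from the last resourceVersion it saw, so no intermediate change is missed. Outside the framework the same idea can be reproduced with a raw API watch; the resourceVersion below is taken from the MODIFIED event in this log and would normally be whatever your previous watch last delivered:

  # resume watching configmaps from the last observed resourceVersion
  RV=21638481
  kubectl get --raw "/api/v1/namespaces/e2e-tests-watch-t7fw2/configmaps?watch=true&resourceVersion=${RV}"

The API server replays every event newer than that version, which is why the restarted watch above still receives the second MODIFIED and the DELETED notifications.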
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:44:22.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-584e1c71-4f1f-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 11:44:23.274: INFO: Waiting up to 5m0s for pod "pod-secrets-584f9189-4f1f-11ea-af88-0242ac110007" in namespace "e2e-tests-secrets-n676s" to be "success or failure"
Feb 14 11:44:23.284: INFO: Pod "pod-secrets-584f9189-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018118ms
Feb 14 11:44:25.521: INFO: Pod "pod-secrets-584f9189-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247065208s
Feb 14 11:44:27.534: INFO: Pod "pod-secrets-584f9189-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260104978s
Feb 14 11:44:30.219: INFO: Pod "pod-secrets-584f9189-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.945484104s
Feb 14 11:44:32.306: INFO: Pod "pod-secrets-584f9189-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.032115897s
Feb 14 11:44:34.320: INFO: Pod "pod-secrets-584f9189-4f1f-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.046618932s
STEP: Saw pod success
Feb 14 11:44:34.321: INFO: Pod "pod-secrets-584f9189-4f1f-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:44:34.327: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-584f9189-4f1f-11ea-af88-0242ac110007 container secret-env-test: 
STEP: delete the pod
Feb 14 11:44:34.418: INFO: Waiting for pod pod-secrets-584f9189-4f1f-11ea-af88-0242ac110007 to disappear
Feb 14 11:44:34.424: INFO: Pod pod-secrets-584f9189-4f1f-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:44:34.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-n676s" for this suite.
Feb 14 11:44:42.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:44:42.534: INFO: namespace: e2e-tests-secrets-n676s, resource: bindings, ignored listing per whitelist
Feb 14 11:44:43.034: INFO: namespace e2e-tests-secrets-n676s deletion completed in 8.604335606s

• [SLOW TEST:20.109 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
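Editor's note: here the secret is consumed through environment variables rather than a volume, via env valueFrom.secretKeyRef. A minimal stand-in (secret, key, and variable names are placeholders):

  kubectl create secret generic env-demo-secret --from-literal=data-1=value-1

  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-env-test
      image: busybox
      command: ["sh", "-c", "echo $SECRET_DATA"]
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef:
            name: env-demo-secret
            key: data-1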
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:44:43.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-644b87d8-4f1f-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 11:44:43.449: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-644dd866-4f1f-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-lqgxt" to be "success or failure"
Feb 14 11:44:43.468: INFO: Pod "pod-projected-configmaps-644dd866-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 18.144118ms
Feb 14 11:44:45.481: INFO: Pod "pod-projected-configmaps-644dd866-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031209734s
Feb 14 11:44:47.499: INFO: Pod "pod-projected-configmaps-644dd866-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049338817s
Feb 14 11:44:49.716: INFO: Pod "pod-projected-configmaps-644dd866-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.266732928s
Feb 14 11:44:51.727: INFO: Pod "pod-projected-configmaps-644dd866-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.277169653s
Feb 14 11:44:53.755: INFO: Pod "pod-projected-configmaps-644dd866-4f1f-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.304944449s
STEP: Saw pod success
Feb 14 11:44:53.755: INFO: Pod "pod-projected-configmaps-644dd866-4f1f-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:44:53.764: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-644dd866-4f1f-11ea-af88-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 11:44:53.900: INFO: Waiting for pod pod-projected-configmaps-644dd866-4f1f-11ea-af88-0242ac110007 to disappear
Feb 14 11:44:53.910: INFO: Pod pod-projected-configmaps-644dd866-4f1f-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:44:53.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lqgxt" for this suite.
Feb 14 11:44:59.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:45:00.015: INFO: namespace: e2e-tests-projected-lqgxt, resource: bindings, ignored listing per whitelist
Feb 14 11:45:00.110: INFO: namespace e2e-tests-projected-lqgxt deletion completed in 6.190839866s

• [SLOW TEST:17.076 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
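Editor's note: same idea as the projected secret earlier, but with a configMap source, a key-to-path mapping, and the pod running as a non-root user. A sketch with placeholder names and UID:

  kubectl create configmap demo-config --from-literal=data-1=value-1

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-nonroot-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                  # run the whole pod as a non-root UID
    volumes:
    - name: cm-volume
      projected:
        sources:
        - configMap:
            name: demo-config
            items:
            - key: data-1
              path: path/to/data-2
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
      volumeMounts:
      - name: cm-volume
        mountPath: /etc/projected-configmap-volume
        readOnly: true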
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:45:00.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 14 11:45:13.432: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:45:14.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-qmd5r" for this suite.
Feb 14 11:45:41.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:45:41.858: INFO: namespace: e2e-tests-replicaset-qmd5r, resource: bindings, ignored listing per whitelist
Feb 14 11:45:41.899: INFO: namespace e2e-tests-replicaset-qmd5r deletion completed in 27.373786879s

• [SLOW TEST:41.789 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
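Editor's note: the ReplicaSet test first creates a bare pod, then a ReplicaSet whose selector matches it (adoption), and finally relabels the pod so the controller releases it and spins up a replacement. The moving parts, with placeholder names:

  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: pod-adoption-release-demo
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: pod-adoption-release-demo
    template:
      metadata:
        labels:
          name: pod-adoption-release-demo
      spec:
        containers:
        - name: nginx
          image: nginx

  # flipping the selected label on an adopted pod makes the ReplicaSet release it
  kubectl label pod <adopted-pod> name=released --overwrite

After the relabel, the released pod keeps running on its own while the ReplicaSet creates a new pod to satisfy replicas: 1, which matches the "Found 1 pods out of 1" check in the log.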
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:45:41.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 14 11:45:42.172: INFO: Waiting up to 5m0s for pod "pod-8756ee7f-4f1f-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-tthxb" to be "success or failure"
Feb 14 11:45:42.290: INFO: Pod "pod-8756ee7f-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 117.405547ms
Feb 14 11:45:44.792: INFO: Pod "pod-8756ee7f-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.619115725s
Feb 14 11:45:46.999: INFO: Pod "pod-8756ee7f-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.826809782s
Feb 14 11:45:49.012: INFO: Pod "pod-8756ee7f-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.838911891s
Feb 14 11:45:51.039: INFO: Pod "pod-8756ee7f-4f1f-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.866793474s
Feb 14 11:45:53.080: INFO: Pod "pod-8756ee7f-4f1f-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.907683073s
STEP: Saw pod success
Feb 14 11:45:53.081: INFO: Pod "pod-8756ee7f-4f1f-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:45:53.100: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8756ee7f-4f1f-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 11:45:53.271: INFO: Waiting for pod pod-8756ee7f-4f1f-11ea-af88-0242ac110007 to disappear
Feb 14 11:45:53.294: INFO: Pod pod-8756ee7f-4f1f-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:45:53.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tthxb" for this suite.
Feb 14 11:45:59.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:45:59.504: INFO: namespace: e2e-tests-emptydir-tthxb, resource: bindings, ignored listing per whitelist
Feb 14 11:45:59.558: INFO: namespace e2e-tests-emptydir-tthxb deletion completed in 6.256291291s

• [SLOW TEST:17.658 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
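Editor's note: "(root,0777,tmpfs)" means a root-owned emptyDir backed by memory that is expected to be mounted world-writable. The tmpfs backing is selected with medium: Memory; a small probe pod (names and the shell probe are illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: cache-volume
      emptyDir:
        medium: Memory                 # back the emptyDir with tmpfs
    containers:
    - name: test-container
      image: busybox
      # print the mount type and the permission bits the test asserts on
      command: ["sh", "-c", "mount | grep /cache; stat -c '%a' /cache"]
      volumeMounts:
      - name: cache-volume
        mountPath: /cache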
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:45:59.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 14 11:46:00.789: INFO: Pod name wrapped-volume-race-92675724-4f1f-11ea-af88-0242ac110007: Found 0 pods out of 5
Feb 14 11:46:05.821: INFO: Pod name wrapped-volume-race-92675724-4f1f-11ea-af88-0242ac110007: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-92675724-4f1f-11ea-af88-0242ac110007 in namespace e2e-tests-emptydir-wrapper-k7pwk, will wait for the garbage collector to delete the pods
Feb 14 11:47:48.273: INFO: Deleting ReplicationController wrapped-volume-race-92675724-4f1f-11ea-af88-0242ac110007 took: 18.446993ms
Feb 14 11:47:48.573: INFO: Terminating ReplicationController wrapped-volume-race-92675724-4f1f-11ea-af88-0242ac110007 pods took: 300.55389ms
STEP: Creating RC which spawns configmap-volume pods
Feb 14 11:48:33.671: INFO: Pod name wrapped-volume-race-ed8557c0-4f1f-11ea-af88-0242ac110007: Found 0 pods out of 5
Feb 14 11:48:38.687: INFO: Pod name wrapped-volume-race-ed8557c0-4f1f-11ea-af88-0242ac110007: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ed8557c0-4f1f-11ea-af88-0242ac110007 in namespace e2e-tests-emptydir-wrapper-k7pwk, will wait for the garbage collector to delete the pods
Feb 14 11:51:12.878: INFO: Deleting ReplicationController wrapped-volume-race-ed8557c0-4f1f-11ea-af88-0242ac110007 took: 29.450544ms
Feb 14 11:51:13.078: INFO: Terminating ReplicationController wrapped-volume-race-ed8557c0-4f1f-11ea-af88-0242ac110007 pods took: 200.943402ms
STEP: Creating RC which spawns configmap-volume pods
Feb 14 11:51:57.538: INFO: Pod name wrapped-volume-race-66f06726-4f20-11ea-af88-0242ac110007: Found 0 pods out of 5
Feb 14 11:52:02.785: INFO: Pod name wrapped-volume-race-66f06726-4f20-11ea-af88-0242ac110007: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-66f06726-4f20-11ea-af88-0242ac110007 in namespace e2e-tests-emptydir-wrapper-k7pwk, will wait for the garbage collector to delete the pods
Feb 14 11:54:38.895: INFO: Deleting ReplicationController wrapped-volume-race-66f06726-4f20-11ea-af88-0242ac110007 took: 13.726792ms
Feb 14 11:54:39.295: INFO: Terminating ReplicationController wrapped-volume-race-66f06726-4f20-11ea-af88-0242ac110007 pods took: 400.569694ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:55:35.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-k7pwk" for this suite.
Feb 14 11:55:43.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:55:43.383: INFO: namespace: e2e-tests-emptydir-wrapper-k7pwk, resource: bindings, ignored listing per whitelist
Feb 14 11:55:43.401: INFO: namespace e2e-tests-emptydir-wrapper-k7pwk deletion completed in 8.366982936s

• [SLOW TEST:583.843 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
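Editor's note: this long-running test guards against a race when many configMap-backed "wrapper" volumes are mounted at once; the framework creates 50 configmaps and repeatedly spawns a 5-replica RC whose pods mount all of them. A drastically scaled-down sketch of the same shape, with placeholder names:

  # a few configmaps standing in for the test's 50
  for i in 0 1 2; do
    kubectl create configmap racey-cm-$i --from-literal=data-1=value-1
  done

  apiVersion: v1
  kind: Pod
  metadata:
    name: wrapped-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "ls /etc/cm-0 /etc/cm-1 /etc/cm-2"]
      volumeMounts:
      - {name: racey-cm-0, mountPath: /etc/cm-0}
      - {name: racey-cm-1, mountPath: /etc/cm-1}
      - {name: racey-cm-2, mountPath: /etc/cm-2}
    volumes:
    - {name: racey-cm-0, configMap: {name: racey-cm-0}}
    - {name: racey-cm-1, configMap: {name: racey-cm-1}}
    - {name: racey-cm-2, configMap: {name: racey-cm-2}}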
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:55:43.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-8htsm.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-8htsm.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-8htsm.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-8htsm.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-8htsm.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-8htsm.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 11:56:05.891: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-8htsm/dns-test-eddb3904-4f20-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-eddb3904-4f20-11ea-af88-0242ac110007)
Feb 14 11:56:05.898: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-8htsm/dns-test-eddb3904-4f20-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-eddb3904-4f20-11ea-af88-0242ac110007)
Feb 14 11:56:05.909: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-8htsm/dns-test-eddb3904-4f20-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-eddb3904-4f20-11ea-af88-0242ac110007)
Feb 14 11:56:05.917: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-8htsm/dns-test-eddb3904-4f20-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-eddb3904-4f20-11ea-af88-0242ac110007)
Feb 14 11:56:05.922: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-8htsm/dns-test-eddb3904-4f20-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-eddb3904-4f20-11ea-af88-0242ac110007)
Feb 14 11:56:05.925: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-8htsm/dns-test-eddb3904-4f20-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-eddb3904-4f20-11ea-af88-0242ac110007)
Feb 14 11:56:05.931: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-8htsm.svc.cluster.local from pod e2e-tests-dns-8htsm/dns-test-eddb3904-4f20-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-eddb3904-4f20-11ea-af88-0242ac110007)
Feb 14 11:56:05.937: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-8htsm/dns-test-eddb3904-4f20-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-eddb3904-4f20-11ea-af88-0242ac110007)
Feb 14 11:56:05.943: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-8htsm/dns-test-eddb3904-4f20-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-eddb3904-4f20-11ea-af88-0242ac110007)
Feb 14 11:56:05.952: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-8htsm/dns-test-eddb3904-4f20-11ea-af88-0242ac110007: the server could not find the requested resource (get pods dns-test-eddb3904-4f20-11ea-af88-0242ac110007)
Feb 14 11:56:05.952: INFO: Lookups using e2e-tests-dns-8htsm/dns-test-eddb3904-4f20-11ea-af88-0242ac110007 failed for: [jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-8htsm.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 14 11:56:11.109: INFO: DNS probes using e2e-tests-dns-8htsm/dns-test-eddb3904-4f20-11ea-af88-0242ac110007 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:56:11.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-8htsm" for this suite.
Feb 14 11:56:19.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:56:19.557: INFO: namespace: e2e-tests-dns-8htsm, resource: bindings, ignored listing per whitelist
Feb 14 11:56:19.561: INFO: namespace e2e-tests-dns-8htsm deletion completed in 8.317098395s

• [SLOW TEST:36.159 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
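Editor's note: the dig loops above probe the standard names cluster DNS must resolve (kubernetes.default and its longer forms, plus the pod's own A record). A quick manual spot-check of the same thing uses a throwaway pod; busybox:1.28 is suggested here only because its nslookup is well-behaved:

  kubectl run dns-check --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default

A healthy cluster answers with the kubernetes service's ClusterIP; failures here usually point at kube-dns/CoreDNS or the node's resolver configuration rather than at the workload itself.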
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:56:19.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 11:56:19.889: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0371e1ec-4f21-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-qwmpg" to be "success or failure"
Feb 14 11:56:19.909: INFO: Pod "downwardapi-volume-0371e1ec-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 19.582681ms
Feb 14 11:56:21.967: INFO: Pod "downwardapi-volume-0371e1ec-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077525535s
Feb 14 11:56:23.985: INFO: Pod "downwardapi-volume-0371e1ec-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095440128s
Feb 14 11:56:26.351: INFO: Pod "downwardapi-volume-0371e1ec-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.461895912s
Feb 14 11:56:28.374: INFO: Pod "downwardapi-volume-0371e1ec-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.485176228s
Feb 14 11:56:30.482: INFO: Pod "downwardapi-volume-0371e1ec-4f21-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.592521613s
STEP: Saw pod success
Feb 14 11:56:30.482: INFO: Pod "downwardapi-volume-0371e1ec-4f21-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:56:30.512: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0371e1ec-4f21-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 11:56:30.840: INFO: Waiting for pod downwardapi-volume-0371e1ec-4f21-11ea-af88-0242ac110007 to disappear
Feb 14 11:56:30.867: INFO: Pod downwardapi-volume-0371e1ec-4f21-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:56:30.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qwmpg" for this suite.
Feb 14 11:56:36.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:56:36.975: INFO: namespace: e2e-tests-projected-qwmpg, resource: bindings, ignored listing per whitelist
Feb 14 11:56:37.011: INFO: namespace e2e-tests-projected-qwmpg deletion completed in 6.130677863s

• [SLOW TEST:17.450 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
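Editor's note: this downward API test checks that when a container declares no CPU limit, the limits.cpu value exposed through the volume falls back to the node's allocatable CPU. A sketch of such a pod (names are placeholders; note the deliberate absence of a resources block):

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpu-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo

With no limit set, the file reports the node's allocatable CPU instead of a container limit, which is the "default cpu limit" behavior the test name refers to.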
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:56:37.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-0dc8e400-4f21-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 11:56:37.219: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0dc9d391-4f21-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-cnmrf" to be "success or failure"
Feb 14 11:56:37.230: INFO: Pod "pod-projected-configmaps-0dc9d391-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.261509ms
Feb 14 11:56:39.382: INFO: Pod "pod-projected-configmaps-0dc9d391-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162293062s
Feb 14 11:56:41.398: INFO: Pod "pod-projected-configmaps-0dc9d391-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178291031s
Feb 14 11:56:43.480: INFO: Pod "pod-projected-configmaps-0dc9d391-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.261052801s
Feb 14 11:56:46.116: INFO: Pod "pod-projected-configmaps-0dc9d391-4f21-11ea-af88-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 8.897070281s
Feb 14 11:56:48.130: INFO: Pod "pod-projected-configmaps-0dc9d391-4f21-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.911050758s
STEP: Saw pod success
Feb 14 11:56:48.131: INFO: Pod "pod-projected-configmaps-0dc9d391-4f21-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:56:48.139: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-0dc9d391-4f21-11ea-af88-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 11:56:48.770: INFO: Waiting for pod pod-projected-configmaps-0dc9d391-4f21-11ea-af88-0242ac110007 to disappear
Feb 14 11:56:48.786: INFO: Pod pod-projected-configmaps-0dc9d391-4f21-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:56:48.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cnmrf" for this suite.
Feb 14 11:56:54.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:56:54.890: INFO: namespace: e2e-tests-projected-cnmrf, resource: bindings, ignored listing per whitelist
Feb 14 11:56:55.025: INFO: namespace e2e-tests-projected-cnmrf deletion completed in 6.230318207s

• [SLOW TEST:18.014 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:56:55.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 11:56:55.266: INFO: Waiting up to 5m0s for pod "downwardapi-volume-188a87fe-4f21-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-bmm75" to be "success or failure"
Feb 14 11:56:55.412: INFO: Pod "downwardapi-volume-188a87fe-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 145.172062ms
Feb 14 11:56:57.919: INFO: Pod "downwardapi-volume-188a87fe-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.652563588s
Feb 14 11:56:59.946: INFO: Pod "downwardapi-volume-188a87fe-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.679392557s
Feb 14 11:57:02.001: INFO: Pod "downwardapi-volume-188a87fe-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.734616076s
Feb 14 11:57:04.016: INFO: Pod "downwardapi-volume-188a87fe-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.749550889s
Feb 14 11:57:06.071: INFO: Pod "downwardapi-volume-188a87fe-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.804903576s
Feb 14 11:57:08.086: INFO: Pod "downwardapi-volume-188a87fe-4f21-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.81915417s
STEP: Saw pod success
Feb 14 11:57:08.086: INFO: Pod "downwardapi-volume-188a87fe-4f21-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:57:08.092: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-188a87fe-4f21-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 11:57:08.156: INFO: Waiting for pod downwardapi-volume-188a87fe-4f21-11ea-af88-0242ac110007 to disappear
Feb 14 11:57:08.176: INFO: Pod downwardapi-volume-188a87fe-4f21-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:57:08.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-bmm75" for this suite.
Feb 14 11:57:14.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:57:14.343: INFO: namespace: e2e-tests-downward-api-bmm75, resource: bindings, ignored listing per whitelist
Feb 14 11:57:14.414: INFO: namespace e2e-tests-downward-api-bmm75 deletion completed in 6.228156597s

• [SLOW TEST:19.388 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:57:14.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-241c90c2-4f21-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 11:57:14.738: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-241d5825-4f21-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-k2p4k" to be "success or failure"
Feb 14 11:57:14.748: INFO: Pod "pod-projected-secrets-241d5825-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.689146ms
Feb 14 11:57:17.357: INFO: Pod "pod-projected-secrets-241d5825-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.618120255s
Feb 14 11:57:19.388: INFO: Pod "pod-projected-secrets-241d5825-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.649264435s
Feb 14 11:57:21.796: INFO: Pod "pod-projected-secrets-241d5825-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.057349866s
Feb 14 11:57:23.834: INFO: Pod "pod-projected-secrets-241d5825-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.09508927s
Feb 14 11:57:25.858: INFO: Pod "pod-projected-secrets-241d5825-4f21-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.119469064s
STEP: Saw pod success
Feb 14 11:57:25.858: INFO: Pod "pod-projected-secrets-241d5825-4f21-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:57:25.880: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-241d5825-4f21-11ea-af88-0242ac110007 container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 11:57:27.005: INFO: Waiting for pod pod-projected-secrets-241d5825-4f21-11ea-af88-0242ac110007 to disappear
Feb 14 11:57:27.021: INFO: Pod pod-projected-secrets-241d5825-4f21-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:57:27.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-k2p4k" for this suite.
Feb 14 11:57:33.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:57:33.188: INFO: namespace: e2e-tests-projected-k2p4k, resource: bindings, ignored listing per whitelist
Feb 14 11:57:33.264: INFO: namespace e2e-tests-projected-k2p4k deletion completed in 6.170932172s

• [SLOW TEST:18.850 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:57:33.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-2f4b194d-4f21-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 11:57:33.546: INFO: Waiting up to 5m0s for pod "pod-secrets-2f59e3a0-4f21-11ea-af88-0242ac110007" in namespace "e2e-tests-secrets-6kx46" to be "success or failure"
Feb 14 11:57:33.556: INFO: Pod "pod-secrets-2f59e3a0-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.303441ms
Feb 14 11:57:36.040: INFO: Pod "pod-secrets-2f59e3a0-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493128616s
Feb 14 11:57:38.052: INFO: Pod "pod-secrets-2f59e3a0-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.505779782s
Feb 14 11:57:40.063: INFO: Pod "pod-secrets-2f59e3a0-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.51596423s
Feb 14 11:57:42.189: INFO: Pod "pod-secrets-2f59e3a0-4f21-11ea-af88-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 8.642329431s
Feb 14 11:57:44.210: INFO: Pod "pod-secrets-2f59e3a0-4f21-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.663695983s
STEP: Saw pod success
Feb 14 11:57:44.211: INFO: Pod "pod-secrets-2f59e3a0-4f21-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 11:57:44.217: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-2f59e3a0-4f21-11ea-af88-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Feb 14 11:57:44.405: INFO: Waiting for pod pod-secrets-2f59e3a0-4f21-11ea-af88-0242ac110007 to disappear
Feb 14 11:57:44.483: INFO: Pod pod-secrets-2f59e3a0-4f21-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:57:44.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-6kx46" for this suite.
Feb 14 11:57:50.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:57:50.949: INFO: namespace: e2e-tests-secrets-6kx46, resource: bindings, ignored listing per whitelist
Feb 14 11:57:51.055: INFO: namespace e2e-tests-secrets-6kx46 deletion completed in 6.523102281s

• [SLOW TEST:17.790 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
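Editor's note: this test is the plain-secret-volume counterpart of the projected case above; defaultMode sits directly under the secret volume source. A hedged sketch with illustrative names and mode value:

  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: demo-secret
        defaultMode: 256   # 0400 in octal; illustrative value
  EOF
  kubectl logs pod-secrets-demo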
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:57:51.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 11:57:51.623: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"3a04be89-4f21-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001e226c2), BlockOwnerDeletion:(*bool)(0xc001e226c3)}}
Feb 14 11:57:51.688: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"39fee9d7-4f21-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001eb09aa), BlockOwnerDeletion:(*bool)(0xc001eb09ab)}}
Feb 14 11:57:51.804: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"3a0140f1-4f21-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001e228ea), BlockOwnerDeletion:(*bool)(0xc001e228eb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:57:56.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7tf2l" for this suite.
Feb 14 11:58:05.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:58:05.414: INFO: namespace: e2e-tests-gc-7tf2l, resource: bindings, ignored listing per whitelist
Feb 14 11:58:05.432: INFO: namespace e2e-tests-gc-7tf2l deletion completed in 8.548338242s

• [SLOW TEST:14.376 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
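Editor's note: the OwnerReferences dumps above show a deliberate circle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2); the test asserts the garbage collector still cleans the pods up rather than deadlocking. A rough kubectl sketch follows. The pod names match the log, but the pause image and the patch-based wiring are assumptions: the suite sets ownerReferences through the API when it creates the pods.

  for p in pod1 pod2 pod3; do kubectl run "$p" --image=k8s.gcr.io/pause:3.1 --restart=Never; done
  wire() {  # make pod $1 owned by pod $2
    uid=$(kubectl get pod "$2" -o jsonpath='{.metadata.uid}')
    kubectl patch pod "$1" --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"$2\",\"uid\":\"$uid\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
  }
  wire pod1 pod3; wire pod2 pod1; wire pod3 pod2
  kubectl delete pod pod1 --wait=false
  kubectl get pods   # the garbage collector should cascade through the circle, not block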
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:58:05.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 11:58:12.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-68wcx" for this suite.
Feb 14 11:58:18.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:58:18.157: INFO: namespace: e2e-tests-namespaces-68wcx, resource: bindings, ignored listing per whitelist
Feb 14 11:58:18.461: INFO: namespace e2e-tests-namespaces-68wcx deletion completed in 6.421983918s
STEP: Destroying namespace "e2e-tests-nsdeletetest-pv2p6" for this suite.
Feb 14 11:58:18.468: INFO: Namespace e2e-tests-nsdeletetest-pv2p6 was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-b65zz" for this suite.
Feb 14 11:58:24.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 11:58:24.567: INFO: namespace: e2e-tests-nsdeletetest-b65zz, resource: bindings, ignored listing per whitelist
Feb 14 11:58:24.607: INFO: namespace e2e-tests-nsdeletetest-b65zz deletion completed in 6.138540793s

• [SLOW TEST:19.174 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
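Editor's note: the namespace test above can be retraced by hand; the namespace and service names below are illustrative, not the generated e2e-tests-* ones.

  kubectl create namespace nsdeletetest-demo
  kubectl -n nsdeletetest-demo create service clusterip test-service --tcp=80:80
  kubectl delete namespace nsdeletetest-demo
  kubectl create namespace nsdeletetest-demo
  kubectl -n nsdeletetest-demo get services   # expected: no resources found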
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 11:58:24.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-z2thm
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-z2thm
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-z2thm

Feb 14 11:58:24.960: INFO: Found 0 stateful pods, waiting for 1
Feb 14 11:58:34.981: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Feb 14 11:58:45.009: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 14 11:58:45.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z2thm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 11:58:45.711: INFO: stderr: "I0214 11:58:45.249115    2379 log.go:172] (0xc0001380b0) (0xc0007125a0) Create stream\nI0214 11:58:45.249712    2379 log.go:172] (0xc0001380b0) (0xc0007125a0) Stream added, broadcasting: 1\nI0214 11:58:45.256287    2379 log.go:172] (0xc0001380b0) Reply frame received for 1\nI0214 11:58:45.256369    2379 log.go:172] (0xc0001380b0) (0xc0007b4e60) Create stream\nI0214 11:58:45.256386    2379 log.go:172] (0xc0001380b0) (0xc0007b4e60) Stream added, broadcasting: 3\nI0214 11:58:45.257403    2379 log.go:172] (0xc0001380b0) Reply frame received for 3\nI0214 11:58:45.257430    2379 log.go:172] (0xc0001380b0) (0xc000712640) Create stream\nI0214 11:58:45.257437    2379 log.go:172] (0xc0001380b0) (0xc000712640) Stream added, broadcasting: 5\nI0214 11:58:45.259311    2379 log.go:172] (0xc0001380b0) Reply frame received for 5\nI0214 11:58:45.536187    2379 log.go:172] (0xc0001380b0) Data frame received for 3\nI0214 11:58:45.536308    2379 log.go:172] (0xc0007b4e60) (3) Data frame handling\nI0214 11:58:45.536342    2379 log.go:172] (0xc0007b4e60) (3) Data frame sent\nI0214 11:58:45.695442    2379 log.go:172] (0xc0001380b0) Data frame received for 1\nI0214 11:58:45.695712    2379 log.go:172] (0xc0001380b0) (0xc0007b4e60) Stream removed, broadcasting: 3\nI0214 11:58:45.695900    2379 log.go:172] (0xc0007125a0) (1) Data frame handling\nI0214 11:58:45.696026    2379 log.go:172] (0xc0001380b0) (0xc000712640) Stream removed, broadcasting: 5\nI0214 11:58:45.696103    2379 log.go:172] (0xc0007125a0) (1) Data frame sent\nI0214 11:58:45.696144    2379 log.go:172] (0xc0001380b0) (0xc0007125a0) Stream removed, broadcasting: 1\nI0214 11:58:45.696223    2379 log.go:172] (0xc0001380b0) Go away received\nI0214 11:58:45.696712    2379 log.go:172] (0xc0001380b0) (0xc0007125a0) Stream removed, broadcasting: 1\nI0214 11:58:45.696767    2379 log.go:172] (0xc0001380b0) (0xc0007b4e60) Stream removed, broadcasting: 3\nI0214 11:58:45.696810    2379 log.go:172] (0xc0001380b0) (0xc000712640) Stream removed, broadcasting: 5\n"
Feb 14 11:58:45.712: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 11:58:45.712: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 11:58:45.721: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 14 11:58:55.765: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 11:58:55.765: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 11:58:55.808: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999679s
Feb 14 11:58:56.830: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.9886693s
Feb 14 11:58:57.895: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.966781052s
Feb 14 11:58:58.918: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.900832705s
Feb 14 11:58:59.936: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.878528008s
Feb 14 11:59:00.952: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.859840989s
Feb 14 11:59:01.969: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.84480825s
Feb 14 11:59:02.982: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.827072293s
Feb 14 11:59:04.048: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.814472934s
Feb 14 11:59:05.069: INFO: Verifying statefulset ss doesn't scale past 1 for another 747.95762ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-z2thm
Feb 14 11:59:06.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z2thm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:59:06.951: INFO: stderr: "I0214 11:59:06.413175    2401 log.go:172] (0xc000746370) (0xc000772640) Create stream\nI0214 11:59:06.413851    2401 log.go:172] (0xc000746370) (0xc000772640) Stream added, broadcasting: 1\nI0214 11:59:06.424431    2401 log.go:172] (0xc000746370) Reply frame received for 1\nI0214 11:59:06.424650    2401 log.go:172] (0xc000746370) (0xc00064af00) Create stream\nI0214 11:59:06.424680    2401 log.go:172] (0xc000746370) (0xc00064af00) Stream added, broadcasting: 3\nI0214 11:59:06.436352    2401 log.go:172] (0xc000746370) Reply frame received for 3\nI0214 11:59:06.436420    2401 log.go:172] (0xc000746370) (0xc000610000) Create stream\nI0214 11:59:06.436444    2401 log.go:172] (0xc000746370) (0xc000610000) Stream added, broadcasting: 5\nI0214 11:59:06.444488    2401 log.go:172] (0xc000746370) Reply frame received for 5\nI0214 11:59:06.782693    2401 log.go:172] (0xc000746370) Data frame received for 3\nI0214 11:59:06.782867    2401 log.go:172] (0xc00064af00) (3) Data frame handling\nI0214 11:59:06.782897    2401 log.go:172] (0xc00064af00) (3) Data frame sent\nI0214 11:59:06.934433    2401 log.go:172] (0xc000746370) Data frame received for 1\nI0214 11:59:06.934948    2401 log.go:172] (0xc000746370) (0xc00064af00) Stream removed, broadcasting: 3\nI0214 11:59:06.935053    2401 log.go:172] (0xc000772640) (1) Data frame handling\nI0214 11:59:06.935085    2401 log.go:172] (0xc000772640) (1) Data frame sent\nI0214 11:59:06.935207    2401 log.go:172] (0xc000746370) (0xc000610000) Stream removed, broadcasting: 5\nI0214 11:59:06.935312    2401 log.go:172] (0xc000746370) (0xc000772640) Stream removed, broadcasting: 1\nI0214 11:59:06.935353    2401 log.go:172] (0xc000746370) Go away received\nI0214 11:59:06.936392    2401 log.go:172] (0xc000746370) (0xc000772640) Stream removed, broadcasting: 1\nI0214 11:59:06.936450    2401 log.go:172] (0xc000746370) (0xc00064af00) Stream removed, broadcasting: 3\nI0214 11:59:06.936493    2401 log.go:172] (0xc000746370) (0xc000610000) Stream removed, broadcasting: 5\n"
Feb 14 11:59:06.952: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 11:59:06.952: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 11:59:06.961: INFO: Found 1 stateful pods, waiting for 3
Feb 14 11:59:17.012: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:59:17.012: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:59:17.012: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 14 11:59:26.987: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:59:26.987: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 11:59:26.987: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 14 11:59:27.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z2thm ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 11:59:27.766: INFO: stderr: "I0214 11:59:27.245129    2423 log.go:172] (0xc0007b2160) (0xc00060a000) Create stream\nI0214 11:59:27.245312    2423 log.go:172] (0xc0007b2160) (0xc00060a000) Stream added, broadcasting: 1\nI0214 11:59:27.252993    2423 log.go:172] (0xc0007b2160) Reply frame received for 1\nI0214 11:59:27.253099    2423 log.go:172] (0xc0007b2160) (0xc00060a640) Create stream\nI0214 11:59:27.253107    2423 log.go:172] (0xc0007b2160) (0xc00060a640) Stream added, broadcasting: 3\nI0214 11:59:27.254823    2423 log.go:172] (0xc0007b2160) Reply frame received for 3\nI0214 11:59:27.254861    2423 log.go:172] (0xc0007b2160) (0xc0002fcd20) Create stream\nI0214 11:59:27.254870    2423 log.go:172] (0xc0007b2160) (0xc0002fcd20) Stream added, broadcasting: 5\nI0214 11:59:27.256593    2423 log.go:172] (0xc0007b2160) Reply frame received for 5\nI0214 11:59:27.386837    2423 log.go:172] (0xc0007b2160) Data frame received for 3\nI0214 11:59:27.387061    2423 log.go:172] (0xc00060a640) (3) Data frame handling\nI0214 11:59:27.387123    2423 log.go:172] (0xc00060a640) (3) Data frame sent\nI0214 11:59:27.753682    2423 log.go:172] (0xc0007b2160) Data frame received for 1\nI0214 11:59:27.754103    2423 log.go:172] (0xc0007b2160) (0xc0002fcd20) Stream removed, broadcasting: 5\nI0214 11:59:27.754186    2423 log.go:172] (0xc00060a000) (1) Data frame handling\nI0214 11:59:27.754218    2423 log.go:172] (0xc00060a000) (1) Data frame sent\nI0214 11:59:27.754354    2423 log.go:172] (0xc0007b2160) (0xc00060a640) Stream removed, broadcasting: 3\nI0214 11:59:27.754426    2423 log.go:172] (0xc0007b2160) (0xc00060a000) Stream removed, broadcasting: 1\nI0214 11:59:27.754454    2423 log.go:172] (0xc0007b2160) Go away received\nI0214 11:59:27.755750    2423 log.go:172] (0xc0007b2160) (0xc00060a000) Stream removed, broadcasting: 1\nI0214 11:59:27.755786    2423 log.go:172] (0xc0007b2160) (0xc00060a640) Stream removed, broadcasting: 3\nI0214 11:59:27.755799    2423 log.go:172] (0xc0007b2160) (0xc0002fcd20) Stream removed, broadcasting: 5\n"
Feb 14 11:59:27.767: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 11:59:27.767: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 11:59:27.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z2thm ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 11:59:28.913: INFO: stderr: "I0214 11:59:28.055224    2445 log.go:172] (0xc00014c840) (0xc00065f360) Create stream\nI0214 11:59:28.055846    2445 log.go:172] (0xc00014c840) (0xc00065f360) Stream added, broadcasting: 1\nI0214 11:59:28.094863    2445 log.go:172] (0xc00014c840) Reply frame received for 1\nI0214 11:59:28.095227    2445 log.go:172] (0xc00014c840) (0xc0005ea000) Create stream\nI0214 11:59:28.095261    2445 log.go:172] (0xc00014c840) (0xc0005ea000) Stream added, broadcasting: 3\nI0214 11:59:28.098321    2445 log.go:172] (0xc00014c840) Reply frame received for 3\nI0214 11:59:28.098371    2445 log.go:172] (0xc00014c840) (0xc00065f400) Create stream\nI0214 11:59:28.098387    2445 log.go:172] (0xc00014c840) (0xc00065f400) Stream added, broadcasting: 5\nI0214 11:59:28.111414    2445 log.go:172] (0xc00014c840) Reply frame received for 5\nI0214 11:59:28.535774    2445 log.go:172] (0xc00014c840) Data frame received for 3\nI0214 11:59:28.536057    2445 log.go:172] (0xc0005ea000) (3) Data frame handling\nI0214 11:59:28.536266    2445 log.go:172] (0xc0005ea000) (3) Data frame sent\nI0214 11:59:28.899689    2445 log.go:172] (0xc00014c840) (0xc0005ea000) Stream removed, broadcasting: 3\nI0214 11:59:28.900085    2445 log.go:172] (0xc00014c840) Data frame received for 1\nI0214 11:59:28.900420    2445 log.go:172] (0xc00014c840) (0xc00065f400) Stream removed, broadcasting: 5\nI0214 11:59:28.900467    2445 log.go:172] (0xc00065f360) (1) Data frame handling\nI0214 11:59:28.900516    2445 log.go:172] (0xc00065f360) (1) Data frame sent\nI0214 11:59:28.900532    2445 log.go:172] (0xc00014c840) (0xc00065f360) Stream removed, broadcasting: 1\nI0214 11:59:28.900551    2445 log.go:172] (0xc00014c840) Go away received\nI0214 11:59:28.901496    2445 log.go:172] (0xc00014c840) (0xc00065f360) Stream removed, broadcasting: 1\nI0214 11:59:28.901519    2445 log.go:172] (0xc00014c840) (0xc0005ea000) Stream removed, broadcasting: 3\nI0214 11:59:28.901547    2445 log.go:172] (0xc00014c840) (0xc00065f400) Stream removed, broadcasting: 5\n"
Feb 14 11:59:28.913: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 11:59:28.913: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 11:59:28.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z2thm ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 14 11:59:29.522: INFO: stderr: "I0214 11:59:29.180011    2468 log.go:172] (0xc000726370) (0xc00079c640) Create stream\nI0214 11:59:29.180354    2468 log.go:172] (0xc000726370) (0xc00079c640) Stream added, broadcasting: 1\nI0214 11:59:29.194278    2468 log.go:172] (0xc000726370) Reply frame received for 1\nI0214 11:59:29.194409    2468 log.go:172] (0xc000726370) (0xc000684d20) Create stream\nI0214 11:59:29.194447    2468 log.go:172] (0xc000726370) (0xc000684d20) Stream added, broadcasting: 3\nI0214 11:59:29.197623    2468 log.go:172] (0xc000726370) Reply frame received for 3\nI0214 11:59:29.197648    2468 log.go:172] (0xc000726370) (0xc000684e60) Create stream\nI0214 11:59:29.197656    2468 log.go:172] (0xc000726370) (0xc000684e60) Stream added, broadcasting: 5\nI0214 11:59:29.200467    2468 log.go:172] (0xc000726370) Reply frame received for 5\nI0214 11:59:29.401414    2468 log.go:172] (0xc000726370) Data frame received for 3\nI0214 11:59:29.401507    2468 log.go:172] (0xc000684d20) (3) Data frame handling\nI0214 11:59:29.401548    2468 log.go:172] (0xc000684d20) (3) Data frame sent\nI0214 11:59:29.510219    2468 log.go:172] (0xc000726370) Data frame received for 1\nI0214 11:59:29.510333    2468 log.go:172] (0xc00079c640) (1) Data frame handling\nI0214 11:59:29.510385    2468 log.go:172] (0xc00079c640) (1) Data frame sent\nI0214 11:59:29.511013    2468 log.go:172] (0xc000726370) (0xc00079c640) Stream removed, broadcasting: 1\nI0214 11:59:29.511601    2468 log.go:172] (0xc000726370) (0xc000684d20) Stream removed, broadcasting: 3\nI0214 11:59:29.512147    2468 log.go:172] (0xc000726370) (0xc000684e60) Stream removed, broadcasting: 5\nI0214 11:59:29.512227    2468 log.go:172] (0xc000726370) (0xc00079c640) Stream removed, broadcasting: 1\nI0214 11:59:29.512276    2468 log.go:172] (0xc000726370) (0xc000684d20) Stream removed, broadcasting: 3\nI0214 11:59:29.512319    2468 log.go:172] (0xc000726370) (0xc000684e60) Stream removed, broadcasting: 5\nI0214 11:59:29.512732    2468 log.go:172] (0xc000726370) Go away received\n"
Feb 14 11:59:29.523: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 14 11:59:29.523: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 14 11:59:29.523: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 11:59:29.546: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Feb 14 11:59:39.574: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 11:59:39.574: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 11:59:39.575: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 11:59:39.608: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999494s
Feb 14 11:59:40.641: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982987265s
Feb 14 11:59:41.660: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.949839537s
Feb 14 11:59:42.738: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.93103495s
Feb 14 11:59:43.763: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.853596021s
Feb 14 11:59:44.787: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.827908008s
Feb 14 11:59:45.832: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.804617155s
Feb 14 11:59:46.872: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.759252486s
Feb 14 11:59:47.949: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.719083257s
Feb 14 11:59:48.963: INFO: Verifying statefulset ss doesn't scale past 3 for another 642.191689ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace e2e-tests-statefulset-z2thm
Feb 14 11:59:49.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z2thm ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:59:50.611: INFO: stderr: "I0214 11:59:50.217699    2490 log.go:172] (0xc0006fe370) (0xc0005e94a0) Create stream\nI0214 11:59:50.217920    2490 log.go:172] (0xc0006fe370) (0xc0005e94a0) Stream added, broadcasting: 1\nI0214 11:59:50.223024    2490 log.go:172] (0xc0006fe370) Reply frame received for 1\nI0214 11:59:50.223100    2490 log.go:172] (0xc0006fe370) (0xc0005e9540) Create stream\nI0214 11:59:50.223116    2490 log.go:172] (0xc0006fe370) (0xc0005e9540) Stream added, broadcasting: 3\nI0214 11:59:50.224320    2490 log.go:172] (0xc0006fe370) Reply frame received for 3\nI0214 11:59:50.224366    2490 log.go:172] (0xc0006fe370) (0xc000360000) Create stream\nI0214 11:59:50.224376    2490 log.go:172] (0xc0006fe370) (0xc000360000) Stream added, broadcasting: 5\nI0214 11:59:50.225198    2490 log.go:172] (0xc0006fe370) Reply frame received for 5\nI0214 11:59:50.322408    2490 log.go:172] (0xc0006fe370) Data frame received for 3\nI0214 11:59:50.322536    2490 log.go:172] (0xc0005e9540) (3) Data frame handling\nI0214 11:59:50.322623    2490 log.go:172] (0xc0005e9540) (3) Data frame sent\nI0214 11:59:50.595182    2490 log.go:172] (0xc0006fe370) Data frame received for 1\nI0214 11:59:50.595483    2490 log.go:172] (0xc0006fe370) (0xc0005e9540) Stream removed, broadcasting: 3\nI0214 11:59:50.595567    2490 log.go:172] (0xc0005e94a0) (1) Data frame handling\nI0214 11:59:50.595596    2490 log.go:172] (0xc0005e94a0) (1) Data frame sent\nI0214 11:59:50.595635    2490 log.go:172] (0xc0006fe370) (0xc000360000) Stream removed, broadcasting: 5\nI0214 11:59:50.595728    2490 log.go:172] (0xc0006fe370) (0xc0005e94a0) Stream removed, broadcasting: 1\nI0214 11:59:50.595780    2490 log.go:172] (0xc0006fe370) Go away received\nI0214 11:59:50.596265    2490 log.go:172] (0xc0006fe370) (0xc0005e94a0) Stream removed, broadcasting: 1\nI0214 11:59:50.596341    2490 log.go:172] (0xc0006fe370) (0xc0005e9540) Stream removed, broadcasting: 3\nI0214 11:59:50.596389    2490 log.go:172] (0xc0006fe370) (0xc000360000) Stream removed, broadcasting: 5\n"
Feb 14 11:59:50.611: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 11:59:50.611: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 11:59:50.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z2thm ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:59:51.411: INFO: stderr: "I0214 11:59:50.898524    2512 log.go:172] (0xc0002a24d0) (0xc0006455e0) Create stream\nI0214 11:59:50.898768    2512 log.go:172] (0xc0002a24d0) (0xc0006455e0) Stream added, broadcasting: 1\nI0214 11:59:50.903944    2512 log.go:172] (0xc0002a24d0) Reply frame received for 1\nI0214 11:59:50.904024    2512 log.go:172] (0xc0002a24d0) (0xc000892000) Create stream\nI0214 11:59:50.904036    2512 log.go:172] (0xc0002a24d0) (0xc000892000) Stream added, broadcasting: 3\nI0214 11:59:50.905130    2512 log.go:172] (0xc0002a24d0) Reply frame received for 3\nI0214 11:59:50.905162    2512 log.go:172] (0xc0002a24d0) (0xc0008a6000) Create stream\nI0214 11:59:50.905173    2512 log.go:172] (0xc0002a24d0) (0xc0008a6000) Stream added, broadcasting: 5\nI0214 11:59:50.906170    2512 log.go:172] (0xc0002a24d0) Reply frame received for 5\nI0214 11:59:51.136489    2512 log.go:172] (0xc0002a24d0) Data frame received for 3\nI0214 11:59:51.136661    2512 log.go:172] (0xc000892000) (3) Data frame handling\nI0214 11:59:51.136700    2512 log.go:172] (0xc000892000) (3) Data frame sent\nI0214 11:59:51.401731    2512 log.go:172] (0xc0002a24d0) (0xc0008a6000) Stream removed, broadcasting: 5\nI0214 11:59:51.401839    2512 log.go:172] (0xc0002a24d0) Data frame received for 1\nI0214 11:59:51.401865    2512 log.go:172] (0xc0002a24d0) (0xc000892000) Stream removed, broadcasting: 3\nI0214 11:59:51.401891    2512 log.go:172] (0xc0006455e0) (1) Data frame handling\nI0214 11:59:51.401915    2512 log.go:172] (0xc0006455e0) (1) Data frame sent\nI0214 11:59:51.401928    2512 log.go:172] (0xc0002a24d0) (0xc0006455e0) Stream removed, broadcasting: 1\nI0214 11:59:51.401946    2512 log.go:172] (0xc0002a24d0) Go away received\nI0214 11:59:51.402411    2512 log.go:172] (0xc0002a24d0) (0xc0006455e0) Stream removed, broadcasting: 1\nI0214 11:59:51.402428    2512 log.go:172] (0xc0002a24d0) (0xc000892000) Stream removed, broadcasting: 3\nI0214 11:59:51.402436    2512 log.go:172] (0xc0002a24d0) (0xc0008a6000) Stream removed, broadcasting: 5\n"
Feb 14 11:59:51.412: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 11:59:51.412: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 11:59:51.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-z2thm ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 14 11:59:51.961: INFO: stderr: "I0214 11:59:51.658273    2533 log.go:172] (0xc000520420) (0xc00072a640) Create stream\nI0214 11:59:51.658519    2533 log.go:172] (0xc000520420) (0xc00072a640) Stream added, broadcasting: 1\nI0214 11:59:51.664142    2533 log.go:172] (0xc000520420) Reply frame received for 1\nI0214 11:59:51.664188    2533 log.go:172] (0xc000520420) (0xc00063ec80) Create stream\nI0214 11:59:51.664232    2533 log.go:172] (0xc000520420) (0xc00063ec80) Stream added, broadcasting: 3\nI0214 11:59:51.665681    2533 log.go:172] (0xc000520420) Reply frame received for 3\nI0214 11:59:51.665747    2533 log.go:172] (0xc000520420) (0xc0003be000) Create stream\nI0214 11:59:51.665788    2533 log.go:172] (0xc000520420) (0xc0003be000) Stream added, broadcasting: 5\nI0214 11:59:51.666859    2533 log.go:172] (0xc000520420) Reply frame received for 5\nI0214 11:59:51.753956    2533 log.go:172] (0xc000520420) Data frame received for 3\nI0214 11:59:51.754068    2533 log.go:172] (0xc00063ec80) (3) Data frame handling\nI0214 11:59:51.754095    2533 log.go:172] (0xc00063ec80) (3) Data frame sent\nI0214 11:59:51.946758    2533 log.go:172] (0xc000520420) Data frame received for 1\nI0214 11:59:51.946902    2533 log.go:172] (0xc00072a640) (1) Data frame handling\nI0214 11:59:51.946935    2533 log.go:172] (0xc00072a640) (1) Data frame sent\nI0214 11:59:51.948916    2533 log.go:172] (0xc000520420) (0xc00072a640) Stream removed, broadcasting: 1\nI0214 11:59:51.950207    2533 log.go:172] (0xc000520420) (0xc00063ec80) Stream removed, broadcasting: 3\nI0214 11:59:51.951422    2533 log.go:172] (0xc000520420) (0xc0003be000) Stream removed, broadcasting: 5\nI0214 11:59:51.951474    2533 log.go:172] (0xc000520420) (0xc00072a640) Stream removed, broadcasting: 1\nI0214 11:59:51.951484    2533 log.go:172] (0xc000520420) (0xc00063ec80) Stream removed, broadcasting: 3\nI0214 11:59:51.951491    2533 log.go:172] (0xc000520420) (0xc0003be000) Stream removed, broadcasting: 5\n"
Feb 14 11:59:51.961: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 14 11:59:51.961: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 14 11:59:51.962: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Feb 14 12:00:12.490: INFO: Deleting all statefulset in ns e2e-tests-statefulset-z2thm
Feb 14 12:00:12.506: INFO: Scaling statefulset ss to 0
Feb 14 12:00:12.539: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 12:00:12.546: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:00:12.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-z2thm" for this suite.
Feb 14 12:00:20.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:00:20.789: INFO: namespace: e2e-tests-statefulset-z2thm, resource: bindings, ignored listing per whitelist
Feb 14 12:00:20.869: INFO: namespace e2e-tests-statefulset-z2thm deletion completed in 8.200607967s

• [SLOW TEST:116.263 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
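Editor's note: the repeated kubectl exec calls above implement the test's readiness trick: moving index.html out of nginx's web root makes the pod's readiness probe fail, and the OrderedReady StatefulSet controller then refuses to advance the scale-up or scale-down until readiness is restored. A hedged sketch of a comparable StatefulSet follows; the manifest is not in the log, so the image, probe and labels are assumptions.

  # assumes a headless Service named "test" already exists for serviceName (the suite creates one)
  cat <<'EOF' | kubectl apply -f -
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss
  spec:
    serviceName: test
    replicas: 1
    selector:
      matchLabels: {foo: bar, baz: blah}
    template:
      metadata:
        labels: {foo: bar, baz: blah}
      spec:
        containers:
        - name: nginx
          image: nginx:1.14-alpine
          readinessProbe:
            httpGet: {path: /index.html, port: 80}
  EOF
  # break readiness on ss-0, scale to 3, and observe that no further pods are created
  kubectl exec ss-0 -- sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
  kubectl scale statefulset ss --replicas=3
  # restore readiness; ss-1 and then ss-2 come up in order
  kubectl exec ss-0 -- sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'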
SSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:00:20.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 12:00:21.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:00:31.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-w86r9" for this suite.
Feb 14 12:01:25.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:01:25.499: INFO: namespace: e2e-tests-pods-w86r9, resource: bindings, ignored listing per whitelist
Feb 14 12:01:25.530: INFO: namespace e2e-tests-pods-w86r9 deletion completed in 54.324897883s

• [SLOW TEST:64.660 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
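Editor's note: the websocket test above streams container logs through the API server's pod log subresource. The CLI equivalent and the underlying endpoint (reachable via kubectl proxy; the pod name is illustrative) look like this:

  kubectl logs -f pod-logs-websocket-demo
  kubectl proxy --port=8001 &
  curl "http://127.0.0.1:8001/api/v1/namespaces/default/pods/pod-logs-websocket-demo/log?follow=true"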
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:01:25.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 14 12:01:26.188: INFO: Number of nodes with available pods: 0
Feb 14 12:01:26.188: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:27.224: INFO: Number of nodes with available pods: 0
Feb 14 12:01:27.225: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:28.547: INFO: Number of nodes with available pods: 0
Feb 14 12:01:28.547: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:29.210: INFO: Number of nodes with available pods: 0
Feb 14 12:01:29.210: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:30.232: INFO: Number of nodes with available pods: 0
Feb 14 12:01:30.232: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:31.227: INFO: Number of nodes with available pods: 0
Feb 14 12:01:31.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:32.979: INFO: Number of nodes with available pods: 0
Feb 14 12:01:32.979: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:33.208: INFO: Number of nodes with available pods: 0
Feb 14 12:01:33.208: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:34.309: INFO: Number of nodes with available pods: 0
Feb 14 12:01:34.309: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:35.227: INFO: Number of nodes with available pods: 0
Feb 14 12:01:35.227: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:36.269: INFO: Number of nodes with available pods: 0
Feb 14 12:01:36.270: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:37.220: INFO: Number of nodes with available pods: 1
Feb 14 12:01:37.220: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 14 12:01:37.409: INFO: Number of nodes with available pods: 0
Feb 14 12:01:37.409: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:38.432: INFO: Number of nodes with available pods: 0
Feb 14 12:01:38.433: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:39.662: INFO: Number of nodes with available pods: 0
Feb 14 12:01:39.662: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:40.926: INFO: Number of nodes with available pods: 0
Feb 14 12:01:40.926: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:41.434: INFO: Number of nodes with available pods: 0
Feb 14 12:01:41.434: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:42.431: INFO: Number of nodes with available pods: 0
Feb 14 12:01:42.432: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:43.735: INFO: Number of nodes with available pods: 0
Feb 14 12:01:43.736: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:44.457: INFO: Number of nodes with available pods: 0
Feb 14 12:01:44.457: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:45.496: INFO: Number of nodes with available pods: 0
Feb 14 12:01:45.496: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:46.433: INFO: Number of nodes with available pods: 0
Feb 14 12:01:46.433: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:01:47.433: INFO: Number of nodes with available pods: 1
Feb 14 12:01:47.433: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-dr5z8, will wait for the garbage collector to delete the pods
Feb 14 12:01:47.535: INFO: Deleting DaemonSet.extensions daemon-set took: 35.192018ms
Feb 14 12:01:47.735: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.812424ms
Feb 14 12:02:02.835: INFO: Number of nodes with available pods: 0
Feb 14 12:02:02.835: INFO: Number of running nodes: 0, number of available pods: 0
Feb 14 12:02:02.846: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dr5z8/daemonsets","resourceVersion":"21640779"},"items":null}

Feb 14 12:02:02.850: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dr5z8/pods","resourceVersion":"21640779"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:02:02.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-dr5z8" for this suite.
Feb 14 12:02:10.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:02:11.047: INFO: namespace: e2e-tests-daemonsets-dr5z8, resource: bindings, ignored listing per whitelist
Feb 14 12:02:11.065: INFO: namespace e2e-tests-daemonsets-dr5z8 deletion completed in 8.199474927s

• [SLOW TEST:45.533 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
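Editor's note: a minimal DaemonSet comparable to the "daemon-set" above is sketched below; the image and labels are assumptions. The e2e test forces a daemon pod's phase to Failed through the API, which kubectl cannot do directly, so deleting the pod is shown as a rough stand-in for observing the controller recreate it.

  cat <<'EOF' | kubectl apply -f -
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels: {app: daemon-set}
    template:
      metadata:
        labels: {app: daemon-set}
      spec:
        containers:
        - name: app
          image: nginx:1.14-alpine
  EOF
  kubectl get pods -l app=daemon-set -o wide   # one pod per schedulable node
  kubectl delete pod -l app=daemon-set         # rough stand-in for a pod that has Failed
  kubectl get pods -l app=daemon-set -o wide   # the controller creates a replacement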
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:02:11.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 14 12:02:23.988: INFO: Successfully updated pod "annotationupdated4e92255-4f21-11ea-af88-0242ac110007"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:02:26.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8dspf" for this suite.
Feb 14 12:02:48.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:02:48.413: INFO: namespace: e2e-tests-projected-8dspf, resource: bindings, ignored listing per whitelist
Feb 14 12:02:48.682: INFO: namespace e2e-tests-projected-8dspf deletion completed in 22.509758505s

• [SLOW TEST:37.617 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
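Editor's note: the "Successfully updated pod" line above corresponds to re-annotating a pod that mounts metadata.annotations through a projected downwardAPI volume; the kubelet rewrites the mounted file on its next sync. A hedged sketch with illustrative names:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: annotationupdate-demo
    annotations: {build: one}
  spec:
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: annotations
              fieldRef:
                fieldPath: metadata.annotations
  EOF
  kubectl annotate pod annotationupdate-demo build=two --overwrite
  kubectl logs annotationupdate-demo   # the mounted file eventually shows the updated annotation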
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:02:48.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Feb 14 12:02:48.920: INFO: Waiting up to 5m0s for pod "client-containers-eb5643c7-4f21-11ea-af88-0242ac110007" in namespace "e2e-tests-containers-q8b9t" to be "success or failure"
Feb 14 12:02:49.031: INFO: Pod "client-containers-eb5643c7-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 111.143524ms
Feb 14 12:02:51.909: INFO: Pod "client-containers-eb5643c7-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.988405328s
Feb 14 12:02:53.945: INFO: Pod "client-containers-eb5643c7-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.024500761s
Feb 14 12:02:56.114: INFO: Pod "client-containers-eb5643c7-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.193650295s
Feb 14 12:02:58.160: INFO: Pod "client-containers-eb5643c7-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.239555045s
Feb 14 12:03:00.173: INFO: Pod "client-containers-eb5643c7-4f21-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.253074637s
STEP: Saw pod success
Feb 14 12:03:00.173: INFO: Pod "client-containers-eb5643c7-4f21-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:03:00.177: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-eb5643c7-4f21-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 12:03:01.263: INFO: Waiting for pod client-containers-eb5643c7-4f21-11ea-af88-0242ac110007 to disappear
Feb 14 12:03:01.763: INFO: Pod client-containers-eb5643c7-4f21-11ea-af88-0242ac110007 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:03:01.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-q8b9t" for this suite.
Feb 14 12:03:07.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:03:07.973: INFO: namespace: e2e-tests-containers-q8b9t, resource: bindings, ignored listing per whitelist
Feb 14 12:03:08.062: INFO: namespace e2e-tests-containers-q8b9t deletion completed in 6.276091653s

• [SLOW TEST:19.380 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
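Editor's note: overriding an image's default arguments (docker CMD) is done with the pod-level args field; the image ENTRYPOINT is left untouched. The test image is not shown in the log, so busybox (which declares no ENTRYPOINT, making args the full command) is used here as an assumption.

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      args: ["echo", "override", "arguments"]   # replaces the image CMD (docker cmd)
  EOF
  kubectl logs client-containers-demo   # prints: override arguments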
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:03:08.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 12:03:08.287: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f6df910c-4f21-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-wcx89" to be "success or failure"
Feb 14 12:03:08.316: INFO: Pod "downwardapi-volume-f6df910c-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 28.312515ms
Feb 14 12:03:10.569: INFO: Pod "downwardapi-volume-f6df910c-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281368834s
Feb 14 12:03:12.590: INFO: Pod "downwardapi-volume-f6df910c-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30222425s
Feb 14 12:03:14.626: INFO: Pod "downwardapi-volume-f6df910c-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.338277786s
Feb 14 12:03:16.675: INFO: Pod "downwardapi-volume-f6df910c-4f21-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.3869891s
Feb 14 12:03:18.707: INFO: Pod "downwardapi-volume-f6df910c-4f21-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.419768692s
STEP: Saw pod success
Feb 14 12:03:18.707: INFO: Pod "downwardapi-volume-f6df910c-4f21-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:03:18.724: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-f6df910c-4f21-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 12:03:19.323: INFO: Waiting for pod downwardapi-volume-f6df910c-4f21-11ea-af88-0242ac110007 to disappear
Feb 14 12:03:19.721: INFO: Pod downwardapi-volume-f6df910c-4f21-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:03:19.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wcx89" for this suite.
Feb 14 12:03:25.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:03:26.008: INFO: namespace: e2e-tests-downward-api-wcx89, resource: bindings, ignored listing per whitelist
Feb 14 12:03:26.062: INFO: namespace e2e-tests-downward-api-wcx89 deletion completed in 6.312452449s

• [SLOW TEST:17.999 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
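Editor's note: the downward API volume test above exposes the container's CPU request as a file via resourceFieldRef. A minimal sketch, with illustrative names and a 250m request:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
            divisor: 1m
  EOF
  kubectl logs downwardapi-volume-demo   # prints 250 (the request in millicores)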
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:03:26.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb 14 12:03:26.429: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 14 12:03:31.447: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:03:34.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-pm9mv" for this suite.
Feb 14 12:03:42.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:03:42.850: INFO: namespace: e2e-tests-replication-controller-pm9mv, resource: bindings, ignored listing per whitelist
Feb 14 12:03:43.151: INFO: namespace e2e-tests-replication-controller-pm9mv deletion completed in 9.135122234s

• [SLOW TEST:17.089 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
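Editor's note: "releasing" a pod means that once its labels stop matching the ReplicationController's selector, the controller drops its ownerReference and creates a replacement. A hedged sketch with illustrative names and image:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: pod-release
  spec:
    replicas: 1
    selector: {name: pod-release}
    template:
      metadata:
        labels: {name: pod-release}
      spec:
        containers:
        - name: app
          image: nginx:1.14-alpine
  EOF
  POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
  kubectl label pod "$POD" name=released --overwrite
  kubectl get pods -l name=pod-release                               # a new replica appears
  kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences}'   # now empty: the pod was released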
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:03:43.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 12:03:44.216: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c4a2717-4f22-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-64r82" to be "success or failure"
Feb 14 12:03:45.573: INFO: Pod "downwardapi-volume-0c4a2717-4f22-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 1.35766253s
Feb 14 12:03:47.605: INFO: Pod "downwardapi-volume-0c4a2717-4f22-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.389645996s
Feb 14 12:03:49.619: INFO: Pod "downwardapi-volume-0c4a2717-4f22-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.403371751s
Feb 14 12:03:51.631: INFO: Pod "downwardapi-volume-0c4a2717-4f22-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.414963264s
Feb 14 12:03:53.966: INFO: Pod "downwardapi-volume-0c4a2717-4f22-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.750329142s
Feb 14 12:03:55.979: INFO: Pod "downwardapi-volume-0c4a2717-4f22-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.763533544s
Feb 14 12:03:58.001: INFO: Pod "downwardapi-volume-0c4a2717-4f22-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.785145657s
STEP: Saw pod success
Feb 14 12:03:58.001: INFO: Pod "downwardapi-volume-0c4a2717-4f22-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:03:58.008: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-0c4a2717-4f22-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 12:03:58.184: INFO: Waiting for pod downwardapi-volume-0c4a2717-4f22-11ea-af88-0242ac110007 to disappear
Feb 14 12:03:58.197: INFO: Pod downwardapi-volume-0c4a2717-4f22-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:03:58.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-64r82" for this suite.
Feb 14 12:04:06.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:04:06.336: INFO: namespace: e2e-tests-projected-64r82, resource: bindings, ignored listing per whitelist
Feb 14 12:04:06.472: INFO: namespace e2e-tests-projected-64r82 deletion completed in 8.268618733s

• [SLOW TEST:23.320 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
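The downward API volume plugin under test exposes a container's CPU limit as a file inside a projected volume. A minimal stand-alone pod doing roughly the same thing (pod/container names, image and the limit value are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
kubectl logs downwardapi-cpu-limit-demo   # prints the CPU limit (here 1, i.e. one whole core)
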
SSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:04:06.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 12:04:06.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:04:17.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-t7htd" for this suite.
Feb 14 12:05:13.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:05:13.350: INFO: namespace: e2e-tests-pods-t7htd, resource: bindings, ignored listing per whitelist
Feb 14 12:05:13.558: INFO: namespace e2e-tests-pods-t7htd deletion completed in 56.34336745s

• [SLOW TEST:67.085 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
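This spec dials the pod's exec subresource directly over a websocket instead of going through kubectl, but the API endpoint is the same one kubectl exec drives. A rough manual equivalent (pod name, image and command are illustrative; the URL in the comment is only a sketch of the endpoint the test hits):

kubectl run ws-exec-demo --generator=run-pod/v1 --image=busybox \
  --restart=Never --command -- sleep 3600
kubectl wait --for=condition=Ready pod/ws-exec-demo
kubectl exec ws-exec-demo -- echo remote-exec-ok
# The test itself opens a websocket against the same subresource, roughly:
#   wss://<apiserver>/api/v1/namespaces/<ns>/pods/ws-exec-demo/exec?command=echo&command=remote-exec-ok&stdout=true
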
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:05:13.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 12:05:13.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-m8vpd'
Feb 14 12:05:15.841: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 14 12:05:15.842: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 14 12:05:15.919: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb 14 12:05:15.963: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 14 12:05:15.999: INFO: scanned /root for discovery docs: 
Feb 14 12:05:15.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-m8vpd'
Feb 14 12:05:43.265: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 14 12:05:43.265: INFO: stdout: "Created e2e-test-nginx-rc-41043374edfe1fab6b11d50d8b72895a\nScaling up e2e-test-nginx-rc-41043374edfe1fab6b11d50d8b72895a from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-41043374edfe1fab6b11d50d8b72895a up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-41043374edfe1fab6b11d50d8b72895a to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 14 12:05:43.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-m8vpd'
Feb 14 12:05:43.485: INFO: stderr: ""
Feb 14 12:05:43.486: INFO: stdout: "e2e-test-nginx-rc-41043374edfe1fab6b11d50d8b72895a-7lrsz "
Feb 14 12:05:43.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-41043374edfe1fab6b11d50d8b72895a-7lrsz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m8vpd'
Feb 14 12:05:43.650: INFO: stderr: ""
Feb 14 12:05:43.650: INFO: stdout: "true"
Feb 14 12:05:43.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-41043374edfe1fab6b11d50d8b72895a-7lrsz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-m8vpd'
Feb 14 12:05:43.802: INFO: stderr: ""
Feb 14 12:05:43.802: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 14 12:05:43.802: INFO: e2e-test-nginx-rc-41043374edfe1fab6b11d50d8b72895a-7lrsz is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Feb 14 12:05:43.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-m8vpd'
Feb 14 12:05:44.036: INFO: stderr: ""
Feb 14 12:05:44.037: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:05:44.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-m8vpd" for this suite.
Feb 14 12:06:08.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:06:08.376: INFO: namespace: e2e-tests-kubectl-m8vpd, resource: bindings, ignored listing per whitelist
Feb 14 12:06:08.383: INFO: namespace e2e-tests-kubectl-m8vpd deletion completed in 24.308140961s

• [SLOW TEST:54.825 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
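Stripped of the test plumbing, the two kubectl invocations logged above are the whole mechanism: create an RC with the deprecated run/v1 generator, then rolling-update it to the very same image. Namespace flags omitted; names as in the log:

kubectl run e2e-test-nginx-rc --generator=run/v1 \
  --image=docker.io/library/nginx:1.14-alpine
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
# kubectl creates a hash-suffixed copy of the RC, scales it up while scaling the
# original down, then renames it back -- exactly the stdout captured above.
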
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:06:08.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Feb 14 12:06:21.433: INFO: Successfully updated pod "annotationupdate62780a12-4f22-11ea-af88-0242ac110007"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:06:23.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-f9cq9" for this suite.
Feb 14 12:06:47.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:06:47.847: INFO: namespace: e2e-tests-downward-api-f9cq9, resource: bindings, ignored listing per whitelist
Feb 14 12:06:47.852: INFO: namespace e2e-tests-downward-api-f9cq9 deletion completed in 24.186222928s

• [SLOW TEST:39.468 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
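What the spec verifies is that a downward API volume file tracking metadata.annotations is refreshed after the pod's annotations are modified. A hand-rolled sketch (names and the annotation key/value are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
kubectl annotate pod annotationupdate-demo --overwrite build=two
# within the kubelet's sync period the mounted file is rewritten; observe it with:
kubectl logs -f annotationupdate-demo
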
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:06:47.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 12:06:47.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-mctvr'
Feb 14 12:06:48.166: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 14 12:06:48.166: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Feb 14 12:06:50.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-mctvr'
Feb 14 12:06:50.666: INFO: stderr: ""
Feb 14 12:06:50.667: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:06:50.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mctvr" for this suite.
Feb 14 12:06:56.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:06:57.134: INFO: namespace: e2e-tests-kubectl-mctvr, resource: bindings, ignored listing per whitelist
Feb 14 12:06:57.172: INFO: namespace e2e-tests-kubectl-mctvr deletion completed in 6.494507104s

• [SLOW TEST:9.320 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
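With neither --restart nor --generator given, kubectl run of this vintage falls back to the deployment/apps.v1 generator (hence the deprecation warning logged above), so the objects to inspect afterwards are a Deployment, its ReplicaSet and the pod it controls:

kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl get deployment,rs,pods -l run=e2e-test-nginx-deployment
kubectl delete deployment e2e-test-nginx-deployment
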
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:06:57.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 14 12:06:57.713: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-md5ph,SelfLink:/api/v1/namespaces/e2e-tests-watch-md5ph/configmaps/e2e-watch-test-resource-version,UID:7f7f5b06-4f22-11ea-a994-fa163e34d433,ResourceVersion:21641450,Generation:0,CreationTimestamp:2020-02-14 12:06:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 14 12:06:57.713: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-md5ph,SelfLink:/api/v1/namespaces/e2e-tests-watch-md5ph/configmaps/e2e-watch-test-resource-version,UID:7f7f5b06-4f22-11ea-a994-fa163e34d433,ResourceVersion:21641451,Generation:0,CreationTimestamp:2020-02-14 12:06:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:06:57.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-md5ph" for this suite.
Feb 14 12:07:03.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:07:03.948: INFO: namespace: e2e-tests-watch-md5ph, resource: bindings, ignored listing per whitelist
Feb 14 12:07:04.058: INFO: namespace e2e-tests-watch-md5ph deletion completed in 6.317316636s

• [SLOW TEST:6.886 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
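The interesting part of this spec is starting the watch at a historical resourceVersion so that only the second modification and the deletion are delivered. Against the raw API that looks roughly like the following (namespace "default", the configmap name and the proxy port are illustrative):

kubectl create configmap e2e-watch-test-resource-version --from-literal=mutation=0
# first modification: remember the resourceVersion it produces
kubectl patch configmap e2e-watch-test-resource-version -p '{"data":{"mutation":"1"}}'
RV=$(kubectl get configmap e2e-watch-test-resource-version -o jsonpath='{.metadata.resourceVersion}')
# second modification and the deletion happen before the watch is opened
kubectl patch configmap e2e-watch-test-resource-version -p '{"data":{"mutation":"2"}}'
kubectl delete configmap e2e-watch-test-resource-version
# replay everything after the first update straight from the API:
kubectl proxy --port=8001 &
curl "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=1&resourceVersion=${RV}&fieldSelector=metadata.name%3De2e-watch-test-resource-version"
# expected stream: one MODIFIED event (mutation: 2) followed by one DELETED event, as logged above
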
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:07:04.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 14 12:07:04.329: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 14 12:07:04.340: INFO: Waiting for terminating namespaces to be deleted...
Feb 14 12:07:04.344: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 14 12:07:04.362: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 14 12:07:04.362: INFO: 	Container coredns ready: true, restart count 0
Feb 14 12:07:04.362: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 14 12:07:04.362: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 14 12:07:04.362: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 14 12:07:04.362: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 14 12:07:04.362: INFO: 	Container coredns ready: true, restart count 0
Feb 14 12:07:04.362: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 14 12:07:04.362: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 12:07:04.362: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 14 12:07:04.362: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 14 12:07:04.362: INFO: 	Container weave ready: true, restart count 0
Feb 14 12:07:04.362: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f3436bd3abcb03], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:07:05.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-tbfss" for this suite.
Feb 14 12:07:11.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:07:11.734: INFO: namespace: e2e-tests-sched-pred-tbfss, resource: bindings, ignored listing per whitelist
Feb 14 12:07:11.747: INFO: namespace e2e-tests-sched-pred-tbfss deletion completed in 6.202304344s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.687 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
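The failure mode being asserted is purely scheduler-side: a pod whose nodeSelector matches no node stays Pending and accumulates FailedScheduling events. A sketch of the same situation (label key/value, pod name and image are made up):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    e2e.example/nonexistent: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl describe pod restricted-pod-demo | grep -A2 Events
# Warning  FailedScheduling ... 0/1 nodes are available: 1 node(s) didn't match node selector.
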
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:07:11.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 12:07:11.963: INFO: Waiting up to 5m0s for pod "downwardapi-volume-881de795-4f22-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-756kc" to be "success or failure"
Feb 14 12:07:11.982: INFO: Pod "downwardapi-volume-881de795-4f22-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 18.561897ms
Feb 14 12:07:14.112: INFO: Pod "downwardapi-volume-881de795-4f22-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148729267s
Feb 14 12:07:16.124: INFO: Pod "downwardapi-volume-881de795-4f22-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161385005s
Feb 14 12:07:18.883: INFO: Pod "downwardapi-volume-881de795-4f22-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.920351841s
Feb 14 12:07:20.900: INFO: Pod "downwardapi-volume-881de795-4f22-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.936855857s
Feb 14 12:07:22.923: INFO: Pod "downwardapi-volume-881de795-4f22-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.959823506s
STEP: Saw pod success
Feb 14 12:07:22.923: INFO: Pod "downwardapi-volume-881de795-4f22-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:07:22.932: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-881de795-4f22-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 12:07:23.252: INFO: Waiting for pod downwardapi-volume-881de795-4f22-11ea-af88-0242ac110007 to disappear
Feb 14 12:07:23.259: INFO: Pod downwardapi-volume-881de795-4f22-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:07:23.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-756kc" for this suite.
Feb 14 12:07:29.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:07:29.494: INFO: namespace: e2e-tests-projected-756kc, resource: bindings, ignored listing per whitelist
Feb 14 12:07:29.561: INFO: namespace e2e-tests-projected-756kc deletion completed in 6.29615888s

• [SLOW TEST:17.814 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
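Here the projected downward API item carries an explicit per-file mode. A minimal pod setting one (mode value, names and image are illustrative):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-item-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -lL /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            mode: 0400
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs downwardapi-item-mode-demo
# the podname file should be listed with the requested 0400 mode (-r--------)
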
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:07:29.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0214 12:07:33.259095       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 12:07:33.259: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:07:33.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-cdbnm" for this suite.
Feb 14 12:07:40.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:07:40.316: INFO: namespace: e2e-tests-gc-cdbnm, resource: bindings, ignored listing per whitelist
Feb 14 12:07:40.377: INFO: namespace e2e-tests-gc-cdbnm deletion completed in 6.537081645s

• [SLOW TEST:10.815 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
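The garbage-collector behaviour above can be observed without the e2e framework: delete a Deployment with the default (non-orphaning) propagation and its ReplicaSet and pods disappear with it. Names and image are illustrative:

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine
kubectl get rs -l app=gc-demo          # the Deployment-owned ReplicaSet
kubectl delete deployment gc-demo      # cascading delete by default
kubectl get rs,pods -l app=gc-demo     # drains to empty once the GC catches up
# Orphaning instead would be: kubectl delete deployment gc-demo --cascade=false
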
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:07:40.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-rr8w4
Feb 14 12:07:50.688: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-rr8w4
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 12:07:50.697: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:11:51.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rr8w4" for this suite.
Feb 14 12:11:59.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:11:59.749: INFO: namespace: e2e-tests-container-probe-rr8w4, resource: bindings, ignored listing per whitelist
Feb 14 12:11:59.847: INFO: namespace e2e-tests-container-probe-rr8w4 deletion completed in 8.246489264s

• [SLOW TEST:259.470 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
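The pod under test keeps /tmp/health present for the whole observation window, so the exec probe never fails and restartCount stays at 0. A comparable stand-alone pod (timings, names and image are illustrative; the e2e image differs):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/health && sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
EOF
# after a few minutes the restart count should still be 0:
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
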
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:11:59.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 12:12:12.517: INFO: Waiting up to 5m0s for pod "client-envvars-3b3eca07-4f23-11ea-af88-0242ac110007" in namespace "e2e-tests-pods-p89kt" to be "success or failure"
Feb 14 12:12:12.697: INFO: Pod "client-envvars-3b3eca07-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 180.252133ms
Feb 14 12:12:14.787: INFO: Pod "client-envvars-3b3eca07-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.269765307s
Feb 14 12:12:16.807: INFO: Pod "client-envvars-3b3eca07-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290363545s
Feb 14 12:12:19.362: INFO: Pod "client-envvars-3b3eca07-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.844781719s
Feb 14 12:12:21.390: INFO: Pod "client-envvars-3b3eca07-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.873349957s
Feb 14 12:12:23.402: INFO: Pod "client-envvars-3b3eca07-4f23-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.885023107s
STEP: Saw pod success
Feb 14 12:12:23.402: INFO: Pod "client-envvars-3b3eca07-4f23-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:12:23.408: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-3b3eca07-4f23-11ea-af88-0242ac110007 container env3cont: 
STEP: delete the pod
Feb 14 12:12:23.802: INFO: Waiting for pod client-envvars-3b3eca07-4f23-11ea-af88-0242ac110007 to disappear
Feb 14 12:12:23.810: INFO: Pod client-envvars-3b3eca07-4f23-11ea-af88-0242ac110007 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:12:23.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-p89kt" for this suite.
Feb 14 12:13:17.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:13:17.961: INFO: namespace: e2e-tests-pods-p89kt, resource: bindings, ignored listing per whitelist
Feb 14 12:13:17.979: INFO: namespace e2e-tests-pods-p89kt deletion completed in 54.163187512s

• [SLOW TEST:78.131 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
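The environment variables being checked are the legacy service links the kubelet injects for every Service that exists in the namespace before the pod starts. Roughly, with illustrative service/pod names and ports:

kubectl create service clusterip fooservice --tcp=8765:8080
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-demo
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox
    command: ["sh", "-c", "env | grep FOOSERVICE"]
EOF
kubectl logs client-envvars-demo
# FOOSERVICE_SERVICE_HOST=<cluster IP>
# FOOSERVICE_SERVICE_PORT=8765
# ...plus FOOSERVICE_PORT_* variables, which is what the spec checks for
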
SSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:13:17.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 12:13:18.186: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 14 12:13:23.987: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 14 12:13:28.007: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 14 12:13:30.043: INFO: Creating deployment "test-rollover-deployment"
Feb 14 12:13:30.071: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 14 12:13:32.092: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 14 12:13:32.110: INFO: Ensure that both replica sets have 1 created replica
Feb 14 12:13:32.119: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 14 12:13:32.133: INFO: Updating deployment test-rollover-deployment
Feb 14 12:13:32.133: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 14 12:13:34.169: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 14 12:13:34.191: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 14 12:13:34.237: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 12:13:34.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279213, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 12:13:36.710: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 12:13:36.711: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279213, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 12:13:38.277: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 12:13:38.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279213, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 12:13:40.258: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 12:13:40.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279213, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 12:13:42.316: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 12:13:42.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279213, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 12:13:44.274: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 12:13:44.275: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279213, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 12:13:46.262: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 12:13:46.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279224, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 12:13:48.278: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 12:13:48.279: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279224, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 12:13:50.278: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 12:13:50.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279224, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 12:13:52.266: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 12:13:52.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279224, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 12:13:54.259: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 12:13:54.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279224, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717279210, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 12:13:56.313: INFO: 
Feb 14 12:13:56.313: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 14 12:13:56.419: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-9qjtc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9qjtc/deployments/test-rollover-deployment,UID:697ce58b-4f23-11ea-a994-fa163e34d433,ResourceVersion:21642163,Generation:2,CreationTimestamp:2020-02-14 12:13:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-14 12:13:30 +0000 UTC 2020-02-14 12:13:30 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-14 12:13:54 +0000 UTC 2020-02-14 12:13:30 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 14 12:13:56.434: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-9qjtc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9qjtc/replicasets/test-rollover-deployment-5b8479fdb6,UID:6abc06ed-4f23-11ea-a994-fa163e34d433,ResourceVersion:21642154,Generation:2,CreationTimestamp:2020-02-14 12:13:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 697ce58b-4f23-11ea-a994-fa163e34d433 0xc002837d67 0xc002837d68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 14 12:13:56.434: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 14 12:13:56.434: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-9qjtc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9qjtc/replicasets/test-rollover-controller,UID:625d1b5d-4f23-11ea-a994-fa163e34d433,ResourceVersion:21642162,Generation:2,CreationTimestamp:2020-02-14 12:13:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 697ce58b-4f23-11ea-a994-fa163e34d433 0xc002837bbf 0xc002837bd0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 14 12:13:56.435: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-9qjtc,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9qjtc/replicasets/test-rollover-deployment-58494b7559,UID:6984802c-4f23-11ea-a994-fa163e34d433,ResourceVersion:21642114,Generation:2,CreationTimestamp:2020-02-14 12:13:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 697ce58b-4f23-11ea-a994-fa163e34d433 0xc002837c97 0xc002837c98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 14 12:13:56.451: INFO: Pod "test-rollover-deployment-5b8479fdb6-rd48h" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-rd48h,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-9qjtc,SelfLink:/api/v1/namespaces/e2e-tests-deployment-9qjtc/pods/test-rollover-deployment-5b8479fdb6-rd48h,UID:6b0d552a-4f23-11ea-a994-fa163e34d433,ResourceVersion:21642139,Generation:0,CreationTimestamp:2020-02-14 12:13:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 6abc06ed-4f23-11ea-a994-fa163e34d433 0xc001c7cab7 0xc001c7cab8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-22pq8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-22pq8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-22pq8 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c7cb90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c7cbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:13:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:13:44 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:13:44 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:13:32 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-14 12:13:32 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-14 12:13:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://33fbc081e5d87d29d460650f5d866e1f63e451b4d6118fb3d3f8b6b7a4905f1a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:13:56.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-9qjtc" for this suite.
Feb 14 12:14:06.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:14:06.842: INFO: namespace: e2e-tests-deployment-9qjtc, resource: bindings, ignored listing per whitelist
Feb 14 12:14:06.848: INFO: namespace e2e-tests-deployment-9qjtc deletion completed in 10.382993606s

• [SLOW TEST:48.869 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
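Editor's note: the rollover test above drives a Deployment with minReadySeconds=10 whose pod template is swapped mid-rollout, and then checks that the old ReplicaSets end at 0 replicas (as in the dumps). A minimal sketch, assuming k8s.io/api and k8s.io/apimachinery are available in go.mod, of a Deployment shaped like the final "test-rollover-deployment" state; name, labels and image are copied from the dump, everything else is a plain default and not the test's exact manifest.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "rollover-pod"}
	d := appsv1.Deployment{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "Deployment"},
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        int32Ptr(1),
			MinReadySeconds: 10, // a new pod must stay ready 10s before the old ReplicaSet is scaled down
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(d, "", "  ")
	fmt.Println(string(out)) // updating Spec.Template (e.g. the image) is what triggers a new ReplicaSet and the rollover
}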
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:14:06.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-7f88d3d0-4f23-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 12:14:07.049: INFO: Waiting up to 5m0s for pod "pod-configmaps-7f89bdce-4f23-11ea-af88-0242ac110007" in namespace "e2e-tests-configmap-vh5z7" to be "success or failure"
Feb 14 12:14:07.057: INFO: Pod "pod-configmaps-7f89bdce-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107503ms
Feb 14 12:14:09.070: INFO: Pod "pod-configmaps-7f89bdce-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020817087s
Feb 14 12:14:11.080: INFO: Pod "pod-configmaps-7f89bdce-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031221012s
Feb 14 12:14:13.144: INFO: Pod "pod-configmaps-7f89bdce-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09525157s
Feb 14 12:14:15.157: INFO: Pod "pod-configmaps-7f89bdce-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108233793s
Feb 14 12:14:17.196: INFO: Pod "pod-configmaps-7f89bdce-4f23-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.147332587s
STEP: Saw pod success
Feb 14 12:14:17.196: INFO: Pod "pod-configmaps-7f89bdce-4f23-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:14:17.201: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-7f89bdce-4f23-11ea-af88-0242ac110007 container configmap-volume-test: 
STEP: delete the pod
Feb 14 12:14:17.470: INFO: Waiting for pod pod-configmaps-7f89bdce-4f23-11ea-af88-0242ac110007 to disappear
Feb 14 12:14:17.484: INFO: Pod pod-configmaps-7f89bdce-4f23-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:14:17.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vh5z7" for this suite.
Feb 14 12:14:25.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:14:25.746: INFO: namespace: e2e-tests-configmap-vh5z7, resource: bindings, ignored listing per whitelist
Feb 14 12:14:25.789: INFO: namespace e2e-tests-configmap-vh5z7 deletion completed in 8.294371627s

• [SLOW TEST:18.941 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
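Editor's note: the "mappings as non-root" case above boils down to a configMap volume whose keys are remapped to explicit file paths via items, consumed by a pod running under a non-root UID. A minimal sketch; the shortened ConfigMap name, key names, uid 1000, busybox image and command are assumptions, not values taken from the test.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-mappings"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)}, // assumed non-root uid
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// remap key "data-1" to an explicit path inside the mount
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox", // assumed; the e2e image differs
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // the pod exits 0 once the mapped file is readable, i.e. "success or failure" above
}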
SS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:14:25.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-8adeab60-4f23-11ea-af88-0242ac110007
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-8adeab60-4f23-11ea-af88-0242ac110007
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:14:38.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sbpwc" for this suite.
Feb 14 12:15:18.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:15:18.600: INFO: namespace: e2e-tests-projected-sbpwc, resource: bindings, ignored listing per whitelist
Feb 14 12:15:18.728: INFO: namespace e2e-tests-projected-sbpwc deletion completed in 40.249142834s

• [SLOW TEST:52.938 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
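Editor's note: the projected-configMap case mounts the ConfigMap through a projected volume and then updates the ConfigMap object; the kubelet's atomic writer eventually swaps the data directory so the new content shows up inside the still-running pod, which is the "waiting to observe update in volume" step above. A minimal sketch of such a projected volume; the names are illustrative, not the test's.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-upd"},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out)) // updating the referenced ConfigMap later changes the projected files in place
}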
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:15:18.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 14 12:15:26.505: INFO: 10 pods remaining
Feb 14 12:15:26.505: INFO: 10 pods have nil DeletionTimestamp
Feb 14 12:15:26.505: INFO: 
Feb 14 12:15:27.206: INFO: 10 pods remaining
Feb 14 12:15:27.206: INFO: 10 pods have nil DeletionTimestamp
Feb 14 12:15:27.206: INFO: 
Feb 14 12:15:28.826: INFO: 10 pods remaining
Feb 14 12:15:28.826: INFO: 0 pods have nil DeletionTimestamp
Feb 14 12:15:28.826: INFO: 
STEP: Gathering metrics
W0214 12:15:29.629023       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 12:15:29.629: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:15:29.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-44nc4" for this suite.
Feb 14 12:15:46.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:15:46.757: INFO: namespace: e2e-tests-gc-44nc4, resource: bindings, ignored listing per whitelist
Feb 14 12:15:46.892: INFO: namespace e2e-tests-gc-44nc4 deletion completed in 17.257946475s

• [SLOW TEST:28.164 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
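Editor's note: "keep the rc around until all its pods are deleted if the deleteOptions says so" is foreground cascading deletion: the owner gets a deletionTimestamp plus the foregroundDeletion finalizer and is only removed after the garbage collector has deleted its dependents, which is why the log above still reports pods with a nil DeletionTimestamp for a while. A minimal client-go sketch of that delete call, using the recent three-argument Delete signature; the kubeconfig path is the one from the log, while the namespace "default" and the RC name "test-rc" are placeholders, not the test's values.

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Foreground propagation: the ReplicationController object persists until the
	// garbage collector has removed every pod it owns.
	policy := metav1.DeletePropagationForeground
	if err := client.CoreV1().ReplicationControllers("default").Delete(
		context.TODO(), "test-rc", metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		log.Fatal(err)
	}
}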
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:15:46.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Feb 14 12:15:47.093: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 14 12:15:47.106: INFO: Waiting for terminating namespaces to be deleted...
Feb 14 12:15:47.110: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Feb 14 12:15:47.131: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 14 12:15:47.131: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 14 12:15:47.131: INFO: 	Container coredns ready: true, restart count 0
Feb 14 12:15:47.131: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Feb 14 12:15:47.131: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 12:15:47.131: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 14 12:15:47.131: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Feb 14 12:15:47.131: INFO: 	Container weave ready: true, restart count 0
Feb 14 12:15:47.131: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 12:15:47.131: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Feb 14 12:15:47.131: INFO: 	Container coredns ready: true, restart count 0
Feb 14 12:15:47.131: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Feb 14 12:15:47.131: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c13d68b8-4f23-11ea-af88-0242ac110007 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c13d68b8-4f23-11ea-af88-0242ac110007 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c13d68b8-4f23-11ea-af88-0242ac110007
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:16:09.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-g4b8x" for this suite.
Feb 14 12:16:23.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:16:23.784: INFO: namespace: e2e-tests-sched-pred-g4b8x, resource: bindings, ignored listing per whitelist
Feb 14 12:16:23.813: INFO: namespace e2e-tests-sched-pred-g4b8x deletion completed in 14.228640795s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:36.920 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
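Editor's note: the predicate validated above is plain spec.nodeSelector: the test stamps a random label onto the only schedulable node and then relaunches a pod that requires that label. A minimal sketch of such a pod; the label key/value is copied from the log, while the pod name and the pause image are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			// only nodes carrying exactly this label are eligible for scheduling
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-c13d68b8-4f23-11ea-af88-0242ac110007": "42",
			},
			Containers: []corev1.Container{{Name: "with-labels", Image: "k8s.gcr.io/pause:3.1"}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}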
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:16:23.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-zp8jk
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zp8jk to expose endpoints map[]
Feb 14 12:16:24.250: INFO: Get endpoints failed (5.183872ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 14 12:16:25.263: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zp8jk exposes endpoints map[] (1.018388427s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-zp8jk
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zp8jk to expose endpoints map[pod1:[80]]
Feb 14 12:16:30.694: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (5.410875651s elapsed, will retry)
Feb 14 12:16:35.869: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zp8jk exposes endpoints map[pod1:[80]] (10.585666303s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-zp8jk
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zp8jk to expose endpoints map[pod1:[80] pod2:[80]]
Feb 14 12:16:41.290: INFO: Unexpected endpoints: found map[d1ee7fd7-4f23-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (5.410942639s elapsed, will retry)
Feb 14 12:16:44.379: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zp8jk exposes endpoints map[pod1:[80] pod2:[80]] (8.499501674s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-zp8jk
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zp8jk to expose endpoints map[pod2:[80]]
Feb 14 12:16:45.696: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zp8jk exposes endpoints map[pod2:[80]] (1.297581765s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-zp8jk
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-zp8jk to expose endpoints map[]
Feb 14 12:16:46.959: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-zp8jk exposes endpoints map[] (1.234945056s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:16:48.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-zp8jk" for this suite.
Feb 14 12:17:12.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:17:12.932: INFO: namespace: e2e-tests-services-zp8jk, resource: bindings, ignored listing per whitelist
Feb 14 12:17:13.144: INFO: namespace e2e-tests-services-zp8jk deletion completed in 24.468365625s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:49.328 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
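Editor's note: the endpoint sets observed above (map[], then pod1:[80], then pod1 and pod2, then pod2, then map[] again) are driven purely by which running pods match the Service's selector. A minimal sketch of a Service like endpoint-test2; the selector key/value is an assumption, since the test's actual selector is not shown in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Service"},
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			// pods such as pod1/pod2 carry this label and listen on port 80;
			// only running, matching pods appear in the endpoints object
			Selector: map[string]string{"name": "endpoint-test2"},
			Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}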
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:17:13.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-ee974f49-4f23-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 12:17:13.385: INFO: Waiting up to 5m0s for pod "pod-configmaps-ee98132f-4f23-11ea-af88-0242ac110007" in namespace "e2e-tests-configmap-gfxqm" to be "success or failure"
Feb 14 12:17:13.430: INFO: Pod "pod-configmaps-ee98132f-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 45.242268ms
Feb 14 12:17:16.065: INFO: Pod "pod-configmaps-ee98132f-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.68063671s
Feb 14 12:17:18.090: INFO: Pod "pod-configmaps-ee98132f-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.705024722s
Feb 14 12:17:20.467: INFO: Pod "pod-configmaps-ee98132f-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.081981229s
Feb 14 12:17:22.519: INFO: Pod "pod-configmaps-ee98132f-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.134437021s
Feb 14 12:17:24.902: INFO: Pod "pod-configmaps-ee98132f-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.517315671s
Feb 14 12:17:26.932: INFO: Pod "pod-configmaps-ee98132f-4f23-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.547740078s
STEP: Saw pod success
Feb 14 12:17:26.933: INFO: Pod "pod-configmaps-ee98132f-4f23-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:17:26.942: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ee98132f-4f23-11ea-af88-0242ac110007 container configmap-volume-test: 
STEP: delete the pod
Feb 14 12:17:27.967: INFO: Waiting for pod pod-configmaps-ee98132f-4f23-11ea-af88-0242ac110007 to disappear
Feb 14 12:17:28.001: INFO: Pod pod-configmaps-ee98132f-4f23-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:17:28.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-gfxqm" for this suite.
Feb 14 12:17:34.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:17:34.291: INFO: namespace: e2e-tests-configmap-gfxqm, resource: bindings, ignored listing per whitelist
Feb 14 12:17:34.342: INFO: namespace e2e-tests-configmap-gfxqm deletion completed in 6.264424118s

• [SLOW TEST:21.198 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
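Editor's note: the defaultMode case sets the permission bits applied to every file projected from the ConfigMap. The API stores the value in decimal, which is why the pod dumps above show 420 (decimal for octal 0644) on the service-account token volume. A minimal sketch of a configMap volume with defaultMode 0400; the mode value and names are illustrative, not the test's.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
				DefaultMode:          int32Ptr(0400), // octal 0400; the JSON below prints it as decimal 256
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}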
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:17:34.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 14 12:17:34.649: INFO: Waiting up to 5m0s for pod "downward-api-fb456053-4f23-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-6rg75" to be "success or failure"
Feb 14 12:17:34.810: INFO: Pod "downward-api-fb456053-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 160.699267ms
Feb 14 12:17:36.819: INFO: Pod "downward-api-fb456053-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169875893s
Feb 14 12:17:38.839: INFO: Pod "downward-api-fb456053-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190052911s
Feb 14 12:17:40.861: INFO: Pod "downward-api-fb456053-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212399355s
Feb 14 12:17:42.918: INFO: Pod "downward-api-fb456053-4f23-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.268791213s
Feb 14 12:17:44.930: INFO: Pod "downward-api-fb456053-4f23-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.281435444s
STEP: Saw pod success
Feb 14 12:17:44.931: INFO: Pod "downward-api-fb456053-4f23-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:17:44.934: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-fb456053-4f23-11ea-af88-0242ac110007 container dapi-container: 
STEP: delete the pod
Feb 14 12:17:44.998: INFO: Waiting for pod downward-api-fb456053-4f23-11ea-af88-0242ac110007 to disappear
Feb 14 12:17:45.007: INFO: Pod downward-api-fb456053-4f23-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:17:45.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6rg75" for this suite.
Feb 14 12:17:51.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:17:51.210: INFO: namespace: e2e-tests-downward-api-6rg75, resource: bindings, ignored listing per whitelist
Feb 14 12:17:51.210: INFO: namespace e2e-tests-downward-api-6rg75 deletion completed in 6.15196073s

• [SLOW TEST:16.866 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
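Editor's note: the downward-API env-var case wires a container's own resource limits and requests into environment variables through resourceFieldRef. A minimal sketch; the env names, busybox image and the 250m/32Mi/500m/64Mi quantities are assumptions, not the test's values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-resources"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // assumed
				Command: []string{"sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m"), corev1.ResourceMemory: resource.MustParse("32Mi")},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m"), corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				Env: []corev1.EnvVar{
					// resourceFieldRef resolves against this container's own resources
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
					{Name: "MEMORY_REQUEST", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"}}},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}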
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:17:51.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-8lkg
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 12:17:51.410: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8lkg" in namespace "e2e-tests-subpath-9q8zw" to be "success or failure"
Feb 14 12:17:51.535: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Pending", Reason="", readiness=false. Elapsed: 124.850487ms
Feb 14 12:17:53.549: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139165991s
Feb 14 12:17:55.566: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155594506s
Feb 14 12:17:57.589: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.178948105s
Feb 14 12:17:59.818: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.407844369s
Feb 14 12:18:01.845: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.435251467s
Feb 14 12:18:03.879: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.468971144s
Feb 14 12:18:05.906: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.49537583s
Feb 14 12:18:07.921: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Running", Reason="", readiness=false. Elapsed: 16.510534015s
Feb 14 12:18:09.949: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Running", Reason="", readiness=false. Elapsed: 18.538803533s
Feb 14 12:18:11.985: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Running", Reason="", readiness=false. Elapsed: 20.574881811s
Feb 14 12:18:14.010: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Running", Reason="", readiness=false. Elapsed: 22.599634219s
Feb 14 12:18:16.028: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Running", Reason="", readiness=false. Elapsed: 24.617721751s
Feb 14 12:18:18.051: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Running", Reason="", readiness=false. Elapsed: 26.640962375s
Feb 14 12:18:20.069: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Running", Reason="", readiness=false. Elapsed: 28.659034798s
Feb 14 12:18:22.089: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Running", Reason="", readiness=false. Elapsed: 30.678882111s
Feb 14 12:18:24.341: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Running", Reason="", readiness=false. Elapsed: 32.93086483s
Feb 14 12:18:26.363: INFO: Pod "pod-subpath-test-projected-8lkg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.952779289s
STEP: Saw pod success
Feb 14 12:18:26.363: INFO: Pod "pod-subpath-test-projected-8lkg" satisfied condition "success or failure"
Feb 14 12:18:26.384: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-8lkg container test-container-subpath-projected-8lkg: 
STEP: delete the pod
Feb 14 12:18:27.752: INFO: Waiting for pod pod-subpath-test-projected-8lkg to disappear
Feb 14 12:18:27.762: INFO: Pod pod-subpath-test-projected-8lkg no longer exists
STEP: Deleting pod pod-subpath-test-projected-8lkg
Feb 14 12:18:27.762: INFO: Deleting pod "pod-subpath-test-projected-8lkg" in namespace "e2e-tests-subpath-9q8zw"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:18:27.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-9q8zw" for this suite.
Feb 14 12:18:35.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:18:35.892: INFO: namespace: e2e-tests-subpath-9q8zw, resource: bindings, ignored listing per whitelist
Feb 14 12:18:35.973: INFO: namespace e2e-tests-subpath-9q8zw deletion completed in 8.191180551s

• [SLOW TEST:44.763 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
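Editor's note: the atomic-writer subpath cases mount a single file out of a volume via volumeMounts[].subPath and then read it repeatedly while the kubelet's atomic writer rotates the underlying data directory, which is why the pod above runs for roughly half a minute before succeeding. A minimal sketch of a container mounting one key of a projected configMap through subPath; every name, path and the busybox image are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-configmap"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox", // assumed
				Command: []string{"sh", "-c", "for i in $(seq 1 30); do cat /probe-volume/probe-file; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/probe-volume/probe-file",
					SubPath:   "probe-file", // mount a single key of the volume as one file
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}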
S
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:18:35.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:19:32.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-gxzdj" for this suite.
Feb 14 12:19:44.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:19:44.638: INFO: namespace: e2e-tests-container-runtime-gxzdj, resource: bindings, ignored listing per whitelist
Feb 14 12:19:44.719: INFO: namespace e2e-tests-container-runtime-gxzdj deletion completed in 12.258681442s

• [SLOW TEST:68.746 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
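Editor's note: the blackbox test runs containers whose commands exit with known codes and checks the resulting RestartCount, Phase, Ready condition and terminated State; the rpa/rpof/rpn suffixes above presumably correspond to restartPolicy Always, OnFailure and Never. A minimal sketch of one such pod with restartPolicy Never and a container that exits 1; the name, busybox image and exit code are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "terminate-cmd"},
		Spec: corev1.PodSpec{
			// with Never, a single non-zero exit leaves Phase=Failed, Ready=false,
			// RestartCount=0 and a Terminated state carrying ExitCode 1
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "terminate-cmd",
				Image:   "busybox", // assumed
				Command: []string{"sh", "-c", "exit 1"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}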
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:19:44.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 14 12:19:44.997: INFO: Waiting up to 5m0s for pod "downward-api-48f6dbe4-4f24-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-rjwgc" to be "success or failure"
Feb 14 12:19:45.029: INFO: Pod "downward-api-48f6dbe4-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 31.611391ms
Feb 14 12:19:47.051: INFO: Pod "downward-api-48f6dbe4-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053862137s
Feb 14 12:19:49.072: INFO: Pod "downward-api-48f6dbe4-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074734251s
Feb 14 12:19:51.106: INFO: Pod "downward-api-48f6dbe4-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108593179s
Feb 14 12:19:53.127: INFO: Pod "downward-api-48f6dbe4-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.129436993s
Feb 14 12:19:55.146: INFO: Pod "downward-api-48f6dbe4-4f24-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.14902176s
STEP: Saw pod success
Feb 14 12:19:55.146: INFO: Pod "downward-api-48f6dbe4-4f24-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:19:55.214: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-48f6dbe4-4f24-11ea-af88-0242ac110007 container dapi-container: 
STEP: delete the pod
Feb 14 12:19:56.059: INFO: Waiting for pod downward-api-48f6dbe4-4f24-11ea-af88-0242ac110007 to disappear
Feb 14 12:19:56.352: INFO: Pod downward-api-48f6dbe4-4f24-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:19:56.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rjwgc" for this suite.
Feb 14 12:20:02.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:20:02.785: INFO: namespace: e2e-tests-downward-api-rjwgc, resource: bindings, ignored listing per whitelist
Feb 14 12:20:02.811: INFO: namespace e2e-tests-downward-api-rjwgc deletion completed in 6.428238613s

• [SLOW TEST:18.091 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
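Editor's note: the pod UID reaches the container through the same downward-API mechanism, this time via fieldRef on metadata.uid. A minimal sketch of the env entry; the variable name, busybox image and command are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // assumed
				Command: []string{"sh", "-c", "echo $POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						// resolved by the kubelet to the pod's own metadata.uid
						FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}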
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:20:02.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-9lb5
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 12:20:03.020: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9lb5" in namespace "e2e-tests-subpath-vtlqc" to be "success or failure"
Feb 14 12:20:03.047: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Pending", Reason="", readiness=false. Elapsed: 27.110179ms
Feb 14 12:20:05.128: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108323293s
Feb 14 12:20:07.328: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307719004s
Feb 14 12:20:09.430: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410197606s
Feb 14 12:20:11.445: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.424690029s
Feb 14 12:20:13.459: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.43909628s
Feb 14 12:20:15.652: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.631656331s
Feb 14 12:20:17.753: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.732975376s
Feb 14 12:20:19.772: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Running", Reason="", readiness=false. Elapsed: 16.751714928s
Feb 14 12:20:21.784: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Running", Reason="", readiness=false. Elapsed: 18.76385983s
Feb 14 12:20:23.805: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Running", Reason="", readiness=false. Elapsed: 20.784972458s
Feb 14 12:20:25.822: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Running", Reason="", readiness=false. Elapsed: 22.802379506s
Feb 14 12:20:27.837: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Running", Reason="", readiness=false. Elapsed: 24.81729473s
Feb 14 12:20:29.864: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Running", Reason="", readiness=false. Elapsed: 26.844122018s
Feb 14 12:20:31.895: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Running", Reason="", readiness=false. Elapsed: 28.875063977s
Feb 14 12:20:33.923: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Running", Reason="", readiness=false. Elapsed: 30.902787986s
Feb 14 12:20:35.995: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Running", Reason="", readiness=false. Elapsed: 32.975414813s
Feb 14 12:20:38.023: INFO: Pod "pod-subpath-test-secret-9lb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.00320744s
STEP: Saw pod success
Feb 14 12:20:38.023: INFO: Pod "pod-subpath-test-secret-9lb5" satisfied condition "success or failure"
Feb 14 12:20:38.034: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-9lb5 container test-container-subpath-secret-9lb5: 
STEP: delete the pod
Feb 14 12:20:39.096: INFO: Waiting for pod pod-subpath-test-secret-9lb5 to disappear
Feb 14 12:20:39.122: INFO: Pod pod-subpath-test-secret-9lb5 no longer exists
STEP: Deleting pod pod-subpath-test-secret-9lb5
Feb 14 12:20:39.122: INFO: Deleting pod "pod-subpath-test-secret-9lb5" in namespace "e2e-tests-subpath-vtlqc"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:20:39.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-vtlqc" for this suite.
Feb 14 12:20:45.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:20:45.324: INFO: namespace: e2e-tests-subpath-vtlqc, resource: bindings, ignored listing per whitelist
Feb 14 12:20:45.467: INFO: namespace e2e-tests-subpath-vtlqc deletion completed in 6.329506154s

• [SLOW TEST:42.656 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
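Editor's note: the secret variant differs from the projected subpath case above only in the volume source. A minimal sketch of that volume plus the matching subPath mount; the secret name, key and paths are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "subpath-secret"},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "test-volume",
		MountPath: "/probe-volume/probe-file",
		SubPath:   "probe-file", // a single key of the secret, mounted as one file
	}
	v, _ := json.MarshalIndent(vol, "", "  ")
	m, _ := json.MarshalIndent(mount, "", "  ")
	fmt.Println(string(v))
	fmt.Println(string(m))
}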
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:20:45.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 12:20:45.666: INFO: Creating deployment "nginx-deployment"
Feb 14 12:20:45.680: INFO: Waiting for observed generation 1
Feb 14 12:20:49.699: INFO: Waiting for all required pods to come up
Feb 14 12:20:50.393: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 14 12:21:28.429: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 14 12:21:28.493: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 14 12:21:28.517: INFO: Updating deployment nginx-deployment
Feb 14 12:21:28.517: INFO: Waiting for observed generation 2
Feb 14 12:21:33.184: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 14 12:21:33.208: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 14 12:21:33.221: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 14 12:21:33.346: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 14 12:21:33.346: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 14 12:21:33.349: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 14 12:21:33.355: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 14 12:21:33.355: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 14 12:21:36.043: INFO: Updating deployment nginx-deployment
Feb 14 12:21:36.043: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 14 12:21:36.913: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 14 12:21:39.451: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 14 12:21:40.271: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-g95r2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g95r2/deployments/nginx-deployment,UID:6d239a91-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643380,Generation:3,CreationTimestamp:2020-02-14 12:20:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-14 12:21:29 +0000 UTC 2020-02-14 12:20:45 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-02-14 12:21:37 +0000 UTC 2020-02-14 12:21:37 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 14 12:21:40.523: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-g95r2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g95r2/replicasets/nginx-deployment-5c98f8fb5,UID:86af1b60-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643375,Generation:3,CreationTimestamp:2020-02-14 12:21:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6d239a91-4f24-11ea-a994-fa163e34d433 0xc002662cc7 0xc002662cc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 14 12:21:40.523: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 14 12:21:40.524: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-g95r2,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-g95r2/replicasets/nginx-deployment-85ddf47c5d,UID:6d27096a-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643371,Generation:3,CreationTimestamp:2020-02-14 12:20:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 6d239a91-4f24-11ea-a994-fa163e34d433 0xc002662d87 0xc002662d88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
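[Editor's note] The two ReplicaSet dumps above show the new nginx:404 set at *13 replicas and the old 1.14-alpine set at *20, summing to 33 (spec.replicas 30 + maxSurge 3). This is consistent with proportional scaling of a Deployment that is mid-rollout: the extra replicas from the scale-up are spread roughly in proportion to each ReplicaSet's current size. The sketch below is a simplified, self-contained illustration of that split using the sizes visible in the log (old 8, new 5); it is not the actual deployment-controller algorithm, and its leftover tie-breaking rule is one plausible choice among several.

```go
// Simplified sketch (NOT the deployment controller's code) of distributing a
// scale-up across ReplicaSets in proportion to their current sizes, using the
// numbers visible above: old RS at 8 replicas, new RS at 5, scaled to 30 with maxSurge=3.
package main

import (
	"fmt"
	"sort"
)

type rs struct {
	name    string
	current int
	target  int
	frac    float64 // fractional part of the proportional share, used for leftovers
}

func main() {
	sets := []rs{
		{name: "nginx-deployment-85ddf47c5d", current: 8}, // old, 1.14-alpine
		{name: "nginx-deployment-5c98f8fb5", current: 5},  // new, nginx:404
	}
	currentTotal := 0
	for _, s := range sets {
		currentTotal += s.current
	}
	budget := 30 + 3                // spec.replicas + maxSurge
	extra := budget - currentTotal  // 20 additional replicas to hand out

	allocated := 0
	for i := range sets {
		share := float64(extra) * float64(sets[i].current) / float64(currentTotal)
		whole := int(share)
		sets[i].target = sets[i].current + whole
		sets[i].frac = share - float64(whole)
		allocated += whole
	}
	// Hand out rounding leftovers largest-fraction first (the real controller
	// has its own tie-breaking; this is just one sensible, deterministic rule).
	sort.Slice(sets, func(i, j int) bool { return sets[i].frac > sets[j].frac })
	for i := 0; allocated < extra; i, allocated = i+1, allocated+1 {
		sets[i%len(sets)].target++
	}
	for _, s := range sets {
		fmt.Printf("%s -> %d replicas\n", s.name, s.target)
	}
	// Matches the ReplicaSet specs logged above: new RS 13, old RS 20.
}
```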
Feb 14 12:21:40.761: INFO: Pod "nginx-deployment-5c98f8fb5-6mx2j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-6mx2j,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-5c98f8fb5-6mx2j,UID:8d83e6ff-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643420,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 86af1b60-4f24-11ea-a994-fa163e34d433 0xc00202c327 0xc00202c328}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202c390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202c3b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.761: INFO: Pod "nginx-deployment-5c98f8fb5-7kjgs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-7kjgs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-5c98f8fb5-7kjgs,UID:872b1de4-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643364,Generation:0,CreationTimestamp:2020-02-14 12:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 86af1b60-4f24-11ea-a994-fa163e34d433 0xc00202c4a7 0xc00202c4a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202c510} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202c530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:29 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-14 12:21:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.761: INFO: Pod "nginx-deployment-5c98f8fb5-88k7w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-88k7w,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-5c98f8fb5-88k7w,UID:86c8c108-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643359,Generation:0,CreationTimestamp:2020-02-14 12:21:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 86af1b60-4f24-11ea-a994-fa163e34d433 0xc00202c6d7 0xc00202c6d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202c740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202c760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:28 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-14 12:21:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.762: INFO: Pod "nginx-deployment-5c98f8fb5-b7pwl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b7pwl,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-5c98f8fb5-b7pwl,UID:86c85822-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643361,Generation:0,CreationTimestamp:2020-02-14 12:21:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 86af1b60-4f24-11ea-a994-fa163e34d433 0xc00202c8f7 0xc00202c8f8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202cbc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202cbe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:28 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-14 12:21:29 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.762: INFO: Pod "nginx-deployment-5c98f8fb5-b82xx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-b82xx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-5c98f8fb5-b82xx,UID:8db5fe2b-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643431,Generation:0,CreationTimestamp:2020-02-14 12:21:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 86af1b60-4f24-11ea-a994-fa163e34d433 0xc00202cde7 0xc00202cde8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202ce50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202ce70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.763: INFO: Pod "nginx-deployment-5c98f8fb5-dcwkc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-dcwkc,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-5c98f8fb5-dcwkc,UID:8d014a6f-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643396,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 86af1b60-4f24-11ea-a994-fa163e34d433 0xc00202cee7 0xc00202cee8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202cff0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202d010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.763: INFO: Pod "nginx-deployment-5c98f8fb5-j9jlf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-j9jlf,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-5c98f8fb5-j9jlf,UID:8d83be4c-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643418,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 86af1b60-4f24-11ea-a994-fa163e34d433 0xc00202d087 0xc00202d088}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202d0f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202d110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.763: INFO: Pod "nginx-deployment-5c98f8fb5-jmjwr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-jmjwr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-5c98f8fb5-jmjwr,UID:8d839503-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643415,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 86af1b60-4f24-11ea-a994-fa163e34d433 0xc00202d187 0xc00202d188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202d1f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202d220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.764: INFO: Pod "nginx-deployment-5c98f8fb5-kmjr4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-kmjr4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-5c98f8fb5-kmjr4,UID:8d3d1103-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643412,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 86af1b60-4f24-11ea-a994-fa163e34d433 0xc00202d297 0xc00202d298}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202d300} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202d320}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.764: INFO: Pod "nginx-deployment-5c98f8fb5-nwrv5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-nwrv5,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-5c98f8fb5-nwrv5,UID:8731044b-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643365,Generation:0,CreationTimestamp:2020-02-14 12:21:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 86af1b60-4f24-11ea-a994-fa163e34d433 0xc00202d497 0xc00202d498}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202d550} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202d570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:30 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:29 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-14 12:21:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.764: INFO: Pod "nginx-deployment-5c98f8fb5-plsmp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-plsmp,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-5c98f8fb5-plsmp,UID:86b64ee1-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643334,Generation:0,CreationTimestamp:2020-02-14 12:21:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 86af1b60-4f24-11ea-a994-fa163e34d433 0xc00202d647 0xc00202d648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202d6c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202d7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:28 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-14 12:21:28 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.765: INFO: Pod "nginx-deployment-5c98f8fb5-v5fgk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-v5fgk,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-5c98f8fb5-v5fgk,UID:8d84059e-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643417,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 86af1b60-4f24-11ea-a994-fa163e34d433 0xc00202d8a7 0xc00202d8a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202d910} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202d930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.765: INFO: Pod "nginx-deployment-5c98f8fb5-x4nnn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-x4nnn,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-5c98f8fb5-x4nnn,UID:8d3cbd0b-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643399,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 86af1b60-4f24-11ea-a994-fa163e34d433 0xc00202da07 0xc00202da08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202da70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202daa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.765: INFO: Pod "nginx-deployment-85ddf47c5d-2z74d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-2z74d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-2z74d,UID:8d3cb906-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643403,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc00202db17 0xc00202db18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202db80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202dba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.765: INFO: Pod "nginx-deployment-85ddf47c5d-4vh9p" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-4vh9p,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-4vh9p,UID:6d42b699-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643273,Generation:0,CreationTimestamp:2020-02-14 12:20:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc00202dc17 0xc00202dc18}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202dc80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202dca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:20 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:20 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-02-14 12:20:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 12:21:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0734ef0f579a2c5552d4072c5bdffc3ff3ba93111d4f414e1832fd10f7c98041}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
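[Editor's note] The log switches here from "is not available" to "is available" for the first time. A minimal sketch of the rule being applied, assuming standard Kubernetes availability semantics: a pod counts as available once its Ready condition has been True for at least minReadySeconds (0 in the ReplicaSet specs above, so Ready and available coincide here). The types and function below are hand-rolled for illustration, not the client-go API.

```go
// Minimal sketch of pod availability: Ready == True for at least minReadySeconds.
package main

import (
	"fmt"
	"time"
)

type condition struct {
	Type               string
	Status             string
	LastTransitionTime time.Time
}

func isAvailable(conds []condition, minReadySeconds int, now time.Time) bool {
	for _, c := range conds {
		if c.Type != "Ready" || c.Status != "True" {
			continue
		}
		// Has the pod been Ready long enough to satisfy minReadySeconds?
		return !c.LastTransitionTime.Add(time.Duration(minReadySeconds) * time.Second).After(now)
	}
	return false
}

func main() {
	now := time.Date(2020, 2, 14, 12, 21, 40, 0, time.UTC)
	ready := []condition{{Type: "Ready", Status: "True",
		LastTransitionTime: time.Date(2020, 2, 14, 12, 21, 20, 0, time.UTC)}}
	pending := []condition{{Type: "Ready", Status: "False",
		LastTransitionTime: time.Date(2020, 2, 14, 12, 21, 29, 0, time.UTC)}}

	fmt.Println(isAvailable(ready, 0, now))   // true  - like nginx-deployment-85ddf47c5d-4vh9p above
	fmt.Println(isAvailable(pending, 0, now)) // false - like the Pending nginx:404 pods
}
```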
Feb 14 12:21:40.766: INFO: Pod "nginx-deployment-85ddf47c5d-6824s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6824s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-6824s,UID:8cfd6bf7-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643397,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc00202dd67 0xc00202dd68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202ddd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202ddf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.766: INFO: Pod "nginx-deployment-85ddf47c5d-8mnxt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-8mnxt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-8mnxt,UID:6d42a026-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643292,Generation:0,CreationTimestamp:2020-02-14 12:20:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc00202de67 0xc00202de68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00202ded0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00202def0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-14 12:20:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 12:21:10 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8d3f629d25cda9afad9f2afef8791410a5ec105ab61a0b7623bf4a91d9bfd337}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.766: INFO: Pod "nginx-deployment-85ddf47c5d-bbdqn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bbdqn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-bbdqn,UID:8d868bd9-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643424,Generation:0,CreationTimestamp:2020-02-14 12:21:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc00202dfc7 0xc00202dfc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020aa030} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020aa050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.767: INFO: Pod "nginx-deployment-85ddf47c5d-c9pwm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-c9pwm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-c9pwm,UID:8d3c87c3-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643398,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0020aa0c7 0xc0020aa0c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020aa1a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020aa1c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.767: INFO: Pod "nginx-deployment-85ddf47c5d-d8kgj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-d8kgj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-d8kgj,UID:8d8619e3-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643416,Generation:0,CreationTimestamp:2020-02-14 12:21:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0020aa237 0xc0020aa238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020aa2a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020aa380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.768: INFO: Pod "nginx-deployment-85ddf47c5d-k668x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-k668x,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-k668x,UID:6d48daab-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643284,Generation:0,CreationTimestamp:2020-02-14 12:20:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0020aa407 0xc0020aa408}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020aa470} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020aa4a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-02-14 12:20:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 12:21:19 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://de1869d57c3abe9cd613537350f2fa3bcf0d65d412429c057f4cd4e7e216d278}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.768: INFO: Pod "nginx-deployment-85ddf47c5d-kpm4s" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kpm4s,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-kpm4s,UID:6d6372c9-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643309,Generation:0,CreationTimestamp:2020-02-14 12:20:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0020aa857 0xc0020aa858}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020aa8c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020aa8e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-02-14 12:20:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 12:21:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fab647ee152e553d613e409cf6df4e8d164767b7b62a322292b468f3a4b88270}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.768: INFO: Pod "nginx-deployment-85ddf47c5d-kxmq6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-kxmq6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-kxmq6,UID:8d3cc7e0-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643413,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0020aa9a7 0xc0020aa9a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020aabb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020aabd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.769: INFO: Pod "nginx-deployment-85ddf47c5d-m8sx6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-m8sx6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-m8sx6,UID:6d488ede-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643295,Generation:0,CreationTimestamp:2020-02-14 12:20:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0020aacd7 0xc0020aacd8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020aad40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020aad60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-02-14 12:20:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 12:21:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://f5fa4db6e0d2f575c147a69951022297d3024ca1bca7d54fd4988b93fb2ada10}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.769: INFO: Pod "nginx-deployment-85ddf47c5d-mc9xv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mc9xv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-mc9xv,UID:6d3e13e0-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643303,Generation:0,CreationTimestamp:2020-02-14 12:20:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0020ab807 0xc0020ab808}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020ab870} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020ab890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-02-14 12:20:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 12:21:19 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8fbf5624c5ee1f9ded70754c2b37b2ce105ec9b9eb7ccf38e645afb551c91a20}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.770: INFO: Pod "nginx-deployment-85ddf47c5d-mmf2f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mmf2f,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-mmf2f,UID:8c359677-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643428,Generation:0,CreationTimestamp:2020-02-14 12:21:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0020aba07 0xc0020aba08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020aba90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020abae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:39 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-02-14 12:21:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.770: INFO: Pod "nginx-deployment-85ddf47c5d-nklwz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nklwz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-nklwz,UID:6d48e1f5-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643300,Generation:0,CreationTimestamp:2020-02-14 12:20:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0020abc27 0xc0020abc28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020abca0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020abcc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.13,StartTime:2020-02-14 12:20:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 12:21:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cc78cd95ef21b8495251038666e661454199e83ce1a96613343639270e5185b9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.770: INFO: Pod "nginx-deployment-85ddf47c5d-rxkzh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rxkzh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-rxkzh,UID:8d85ba9e-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643422,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0020abd97 0xc0020abd98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020abe00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020abe20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.770: INFO: Pod "nginx-deployment-85ddf47c5d-sbbzr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-sbbzr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-sbbzr,UID:6d48b0d2-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643278,Generation:0,CreationTimestamp:2020-02-14 12:20:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0020abe97 0xc0020abe98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020abf00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020abf20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:20:46 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.10,StartTime:2020-02-14 12:20:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-14 12:21:20 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8e8b88c9acee64a6cdc8f95ca532dfac9b0f48202c5bce151d91288989f05ce5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.771: INFO: Pod "nginx-deployment-85ddf47c5d-v7pfn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-v7pfn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-v7pfn,UID:8d86295a-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643423,Generation:0,CreationTimestamp:2020-02-14 12:21:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0020abfe7 0xc0020abfe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dc050} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dc070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.771: INFO: Pod "nginx-deployment-85ddf47c5d-vt9cx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vt9cx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-vt9cx,UID:8cfd922e-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643388,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0026dc0e7 0xc0026dc0e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dc150} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dc170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:39 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.771: INFO: Pod "nginx-deployment-85ddf47c5d-vvpj5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vvpj5,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-vvpj5,UID:8d3cddd1-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643411,Generation:0,CreationTimestamp:2020-02-14 12:21:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0026dc1e7 0xc0026dc1e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dc250} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dc270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 14 12:21:40.771: INFO: Pod "nginx-deployment-85ddf47c5d-wn478" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wn478,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-g95r2,SelfLink:/api/v1/namespaces/e2e-tests-deployment-g95r2/pods/nginx-deployment-85ddf47c5d-wn478,UID:8d86c08a-4f24-11ea-a994-fa163e34d433,ResourceVersion:21643425,Generation:0,CreationTimestamp:2020-02-14 12:21:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 6d27096a-4f24-11ea-a994-fa163e34d433 0xc0026dc2e7 0xc0026dc2e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-6nrhq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6nrhq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-6nrhq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dc350} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dc370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 12:21:40 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:21:40.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-g95r2" for this suite.
Feb 14 12:23:01.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:23:01.153: INFO: namespace: e2e-tests-deployment-g95r2, resource: bindings, ignored listing per whitelist
Feb 14 12:23:01.233: INFO: namespace e2e-tests-deployment-g95r2 deletion completed in 1m20.291979855s

• [SLOW TEST:135.765 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
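(Editor's note: the proportional-scaling behaviour exercised by the test above can be reproduced outside the suite with an ordinary Deployment plus a replica change while a rollout is in progress. The sketch below is illustrative only: the image and the name/pod-template labels are taken from the log, everything else, including the replica counts, is assumed.)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 10
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

# Scaling while an update is still rolling out lets the controller split the
# additional replicas proportionally between the old and new ReplicaSets,
# which is the property the test asserts on:
kubectl scale deployment nginx-deployment --replicas=30
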
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:23:01.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 14 12:23:02.655: INFO: Waiting up to 5m0s for pod "pod-be869381-4f24-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-rwqmr" to be "success or failure"
Feb 14 12:23:02.877: INFO: Pod "pod-be869381-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 221.062732ms
Feb 14 12:23:05.122: INFO: Pod "pod-be869381-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466102791s
Feb 14 12:23:07.148: INFO: Pod "pod-be869381-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.492165022s
Feb 14 12:23:09.164: INFO: Pod "pod-be869381-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.50860621s
Feb 14 12:23:11.177: INFO: Pod "pod-be869381-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.521354587s
Feb 14 12:23:14.324: INFO: Pod "pod-be869381-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.668406973s
Feb 14 12:23:16.525: INFO: Pod "pod-be869381-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.869902666s
Feb 14 12:23:18.558: INFO: Pod "pod-be869381-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.902738003s
Feb 14 12:23:20.584: INFO: Pod "pod-be869381-4f24-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.928569131s
STEP: Saw pod success
Feb 14 12:23:20.584: INFO: Pod "pod-be869381-4f24-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:23:20.590: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-be869381-4f24-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 12:23:21.846: INFO: Waiting for pod pod-be869381-4f24-11ea-af88-0242ac110007 to disappear
Feb 14 12:23:22.092: INFO: Pod pod-be869381-4f24-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:23:22.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-rwqmr" for this suite.
Feb 14 12:23:28.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:23:28.284: INFO: namespace: e2e-tests-emptydir-rwqmr, resource: bindings, ignored listing per whitelist
Feb 14 12:23:28.368: INFO: namespace e2e-tests-emptydir-rwqmr deletion completed in 6.268681728s

• [SLOW TEST:27.135 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
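(Editor's note: the EmptyDir check above amounts to mounting an emptyDir volume on the default medium and inspecting the mount's mode from inside the container. A minimal sketch of an equivalent pod follows; the container name test-container matches the log, the busybox image and command are assumptions.)

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]   # prints the mode of the mount point
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                                   # default medium (node disk), no medium/sizeLimit set
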
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:23:28.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-ce437367-4f24-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 12:23:28.698: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ce49da20-4f24-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-42j2v" to be "success or failure"
Feb 14 12:23:28.732: INFO: Pod "pod-projected-configmaps-ce49da20-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 34.069001ms
Feb 14 12:23:30.820: INFO: Pod "pod-projected-configmaps-ce49da20-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122081317s
Feb 14 12:23:32.895: INFO: Pod "pod-projected-configmaps-ce49da20-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196871721s
Feb 14 12:23:34.971: INFO: Pod "pod-projected-configmaps-ce49da20-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273357985s
Feb 14 12:23:36.993: INFO: Pod "pod-projected-configmaps-ce49da20-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.294837596s
Feb 14 12:23:38.999: INFO: Pod "pod-projected-configmaps-ce49da20-4f24-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.300899182s
STEP: Saw pod success
Feb 14 12:23:38.999: INFO: Pod "pod-projected-configmaps-ce49da20-4f24-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:23:39.002: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-ce49da20-4f24-11ea-af88-0242ac110007 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 12:23:39.150: INFO: Waiting for pod pod-projected-configmaps-ce49da20-4f24-11ea-af88-0242ac110007 to disappear
Feb 14 12:23:39.168: INFO: Pod pod-projected-configmaps-ce49da20-4f24-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:23:39.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-42j2v" for this suite.
Feb 14 12:23:47.249: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:23:47.406: INFO: namespace: e2e-tests-projected-42j2v, resource: bindings, ignored listing per whitelist
Feb 14 12:23:47.423: INFO: namespace e2e-tests-projected-42j2v deletion completed in 8.245668723s

• [SLOW TEST:19.054 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
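(Editor's note: the projected-ConfigMap case above can be reproduced with a projected volume whose defaultMode is set explicitly. Sketch only: the container name projected-configmap-volume-test matches the log, while the ConfigMap name, image, command and the particular defaultMode value are assumptions.)

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-mode-check
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      defaultMode: 420           # decimal for octal 0644; the test asserts the files carry this mode
      sources:
      - configMap:
          name: projected-configmap-test-volume
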
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:23:47.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-d99db13e-4f24-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 12:23:47.694: INFO: Waiting up to 5m0s for pod "pod-configmaps-d99f1d23-4f24-11ea-af88-0242ac110007" in namespace "e2e-tests-configmap-7pzgj" to be "success or failure"
Feb 14 12:23:47.703: INFO: Pod "pod-configmaps-d99f1d23-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.715114ms
Feb 14 12:23:49.760: INFO: Pod "pod-configmaps-d99f1d23-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065730588s
Feb 14 12:23:51.789: INFO: Pod "pod-configmaps-d99f1d23-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09517697s
Feb 14 12:23:53.929: INFO: Pod "pod-configmaps-d99f1d23-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.234619415s
Feb 14 12:23:55.941: INFO: Pod "pod-configmaps-d99f1d23-4f24-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.247126469s
Feb 14 12:23:57.961: INFO: Pod "pod-configmaps-d99f1d23-4f24-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.267461523s
STEP: Saw pod success
Feb 14 12:23:57.962: INFO: Pod "pod-configmaps-d99f1d23-4f24-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:23:57.966: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d99f1d23-4f24-11ea-af88-0242ac110007 container configmap-volume-test: 
STEP: delete the pod
Feb 14 12:23:58.904: INFO: Waiting for pod pod-configmaps-d99f1d23-4f24-11ea-af88-0242ac110007 to disappear
Feb 14 12:23:58.945: INFO: Pod pod-configmaps-d99f1d23-4f24-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:23:58.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7pzgj" for this suite.
Feb 14 12:24:05.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:24:05.185: INFO: namespace: e2e-tests-configmap-7pzgj, resource: bindings, ignored listing per whitelist
Feb 14 12:24:05.223: INFO: namespace e2e-tests-configmap-7pzgj deletion completed in 6.271611697s

• [SLOW TEST:17.800 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
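(Editor's note: the plain ConfigMap-volume test above follows the same pattern without the projected wrapper. Minimal sketch with assumed names and data; only the container name configmap-volume-test is taken from the log.)

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
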
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:24:05.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb 14 12:24:05.340: INFO: namespace e2e-tests-kubectl-ttbcq
Feb 14 12:24:05.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ttbcq'
Feb 14 12:24:07.635: INFO: stderr: ""
Feb 14 12:24:07.636: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 14 12:24:08.662: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:24:08.662: INFO: Found 0 / 1
Feb 14 12:24:10.359: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:24:10.359: INFO: Found 0 / 1
Feb 14 12:24:10.707: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:24:10.707: INFO: Found 0 / 1
Feb 14 12:24:11.656: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:24:11.656: INFO: Found 0 / 1
Feb 14 12:24:12.667: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:24:12.667: INFO: Found 0 / 1
Feb 14 12:24:13.887: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:24:13.888: INFO: Found 0 / 1
Feb 14 12:24:14.657: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:24:14.657: INFO: Found 0 / 1
Feb 14 12:24:15.694: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:24:15.694: INFO: Found 0 / 1
Feb 14 12:24:16.695: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:24:16.696: INFO: Found 0 / 1
Feb 14 12:24:17.664: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:24:17.664: INFO: Found 1 / 1
Feb 14 12:24:17.664: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 14 12:24:17.670: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:24:17.670: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 14 12:24:17.670: INFO: wait on redis-master startup in e2e-tests-kubectl-ttbcq 
Feb 14 12:24:17.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rbvlp redis-master --namespace=e2e-tests-kubectl-ttbcq'
Feb 14 12:24:17.871: INFO: stderr: ""
Feb 14 12:24:17.872: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 14 Feb 12:24:15.406 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 Feb 12:24:15.406 # Server started, Redis version 3.2.12\n1:M 14 Feb 12:24:15.407 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 Feb 12:24:15.407 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 14 12:24:17.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-ttbcq'
Feb 14 12:24:18.227: INFO: stderr: ""
Feb 14 12:24:18.227: INFO: stdout: "service/rm2 exposed\n"
Feb 14 12:24:18.244: INFO: Service rm2 in namespace e2e-tests-kubectl-ttbcq found.
STEP: exposing service
Feb 14 12:24:20.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-ttbcq'
Feb 14 12:24:20.706: INFO: stderr: ""
Feb 14 12:24:20.707: INFO: stdout: "service/rm3 exposed\n"
Feb 14 12:24:20.724: INFO: Service rm3 in namespace e2e-tests-kubectl-ttbcq found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:24:22.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ttbcq" for this suite.
Feb 14 12:24:48.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:24:48.952: INFO: namespace: e2e-tests-kubectl-ttbcq, resource: bindings, ignored listing per whitelist
Feb 14 12:24:49.026: INFO: namespace e2e-tests-kubectl-ttbcq deletion completed in 26.25704942s

• [SLOW TEST:43.803 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
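(Editor's note: stripped of the per-run kubeconfig and namespace flags, the expose steps above reduce to the following kubectl calls; names and ports are copied from the log, the final verification command is an addition.)

# expose an existing replication controller as a new service
kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379

# expose an existing service under a second name and port
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379

# both services should resolve to the redis-master pod's endpoints
kubectl get endpoints rm2 rm3
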
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:24:49.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Feb 14 12:24:49.281: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 14 12:24:49.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zp9jf'
Feb 14 12:24:49.763: INFO: stderr: ""
Feb 14 12:24:49.763: INFO: stdout: "service/redis-slave created\n"
Feb 14 12:24:49.765: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 14 12:24:49.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zp9jf'
Feb 14 12:24:50.268: INFO: stderr: ""
Feb 14 12:24:50.268: INFO: stdout: "service/redis-master created\n"
Feb 14 12:24:50.269: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 14 12:24:50.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zp9jf'
Feb 14 12:24:50.816: INFO: stderr: ""
Feb 14 12:24:50.816: INFO: stdout: "service/frontend created\n"
Feb 14 12:24:50.817: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 14 12:24:50.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zp9jf'
Feb 14 12:24:51.251: INFO: stderr: ""
Feb 14 12:24:51.251: INFO: stdout: "deployment.extensions/frontend created\n"
Feb 14 12:24:51.252: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 14 12:24:51.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zp9jf'
Feb 14 12:24:51.871: INFO: stderr: ""
Feb 14 12:24:51.871: INFO: stdout: "deployment.extensions/redis-master created\n"
Feb 14 12:24:51.872: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 14 12:24:51.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zp9jf'
Feb 14 12:24:52.489: INFO: stderr: ""
Feb 14 12:24:52.489: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Feb 14 12:24:52.489: INFO: Waiting for all frontend pods to be Running.
Feb 14 12:25:27.543: INFO: Waiting for frontend to serve content.
Feb 14 12:25:27.780: INFO: Trying to add a new entry to the guestbook.
Feb 14 12:25:27.837: INFO: Verifying that added entry can be retrieved.
Feb 14 12:25:27.901: INFO: Failed to get response from guestbook. err: , response: {"data": ""}
STEP: using delete to clean up resources
Feb 14 12:25:33.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zp9jf'
Feb 14 12:25:33.366: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 12:25:33.367: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 12:25:33.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zp9jf'
Feb 14 12:25:33.696: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 12:25:33.696: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 12:25:33.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zp9jf'
Feb 14 12:25:34.014: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 12:25:34.014: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 12:25:34.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zp9jf'
Feb 14 12:25:34.203: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 12:25:34.203: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 12:25:34.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zp9jf'
Feb 14 12:25:34.399: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 12:25:34.399: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 12:25:34.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zp9jf'
Feb 14 12:25:34.786: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 12:25:34.787: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:25:34.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zp9jf" for this suite.
Feb 14 12:26:26.900: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:26:26.975: INFO: namespace: e2e-tests-kubectl-zp9jf, resource: bindings, ignored listing per whitelist
Feb 14 12:26:27.045: INFO: namespace e2e-tests-kubectl-zp9jf deletion completed in 52.24026567s

• [SLOW TEST:98.019 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
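The guestbook manifests above use the old extensions/v1beta1 Deployment API, which allowed spec.selector to default to the pod template labels. On clusters where that API group has been removed, the frontend Deployment needs an explicit selector under apps/v1; a minimal sketch of the same object:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:                        # required in apps/v1; must match the template labels
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80

The redis-master and redis-slave Deployments convert the same way.
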
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:26:27.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 14 12:26:49.387: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 12:26:49.497: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 12:26:51.498: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 12:26:51.510: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 12:26:53.498: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 12:26:53.518: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 12:26:55.498: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 12:26:55.515: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 12:26:57.498: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 12:26:57.516: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 12:26:59.498: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 12:26:59.513: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 12:27:01.498: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 12:27:01.537: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 12:27:03.498: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 12:27:03.516: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:27:03.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-x5z4q" for this suite.
Feb 14 12:27:31.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:27:31.671: INFO: namespace: e2e-tests-container-lifecycle-hook-x5z4q, resource: bindings, ignored listing per whitelist
Feb 14 12:27:31.736: INFO: namespace e2e-tests-container-lifecycle-hook-x5z4q deletion completed in 28.1845745s

• [SLOW TEST:64.690 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
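The pod spec used by this test is not echoed in the log, but the shape of a preStop HTTP hook is simple. A minimal sketch, with the image, handler address, path and port as illustrative assumptions rather than the values the e2e test actually uses:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1          # illustrative image
    lifecycle:
      preStop:
        httpGet:
          host: 10.32.0.5                # illustrative: IP of the handler pod created earlier
          port: 8080
          path: /echo?msg=prestop-hook   # illustrative path on the handler

When the pod is deleted, the kubelet issues this HTTP GET before terminating the container; the 'check prestop hook' step then checks that the handler pod received the request.
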
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:27:31.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-5f794819-4f25-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 12:27:32.278: INFO: Waiting up to 5m0s for pod "pod-secrets-5f7ba7c2-4f25-11ea-af88-0242ac110007" in namespace "e2e-tests-secrets-nfmt5" to be "success or failure"
Feb 14 12:27:32.311: INFO: Pod "pod-secrets-5f7ba7c2-4f25-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 33.521383ms
Feb 14 12:27:34.663: INFO: Pod "pod-secrets-5f7ba7c2-4f25-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384695751s
Feb 14 12:27:36.679: INFO: Pod "pod-secrets-5f7ba7c2-4f25-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.400784109s
Feb 14 12:27:39.028: INFO: Pod "pod-secrets-5f7ba7c2-4f25-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.749777859s
Feb 14 12:27:41.047: INFO: Pod "pod-secrets-5f7ba7c2-4f25-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.769272337s
Feb 14 12:27:43.066: INFO: Pod "pod-secrets-5f7ba7c2-4f25-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.788308651s
Feb 14 12:27:45.101: INFO: Pod "pod-secrets-5f7ba7c2-4f25-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.822760814s
STEP: Saw pod success
Feb 14 12:27:45.101: INFO: Pod "pod-secrets-5f7ba7c2-4f25-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:27:45.106: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5f7ba7c2-4f25-11ea-af88-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Feb 14 12:27:45.406: INFO: Waiting for pod pod-secrets-5f7ba7c2-4f25-11ea-af88-0242ac110007 to disappear
Feb 14 12:27:45.617: INFO: Pod pod-secrets-5f7ba7c2-4f25-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:27:45.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-nfmt5" for this suite.
Feb 14 12:27:51.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:27:51.754: INFO: namespace: e2e-tests-secrets-nfmt5, resource: bindings, ignored listing per whitelist
Feb 14 12:27:51.868: INFO: namespace e2e-tests-secrets-nfmt5 deletion completed in 6.235495774s

• [SLOW TEST:20.132 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
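The pod manifest is not printed here either; what the test exercises is a Secret volume whose files get a restrictive mode while the pod runs as a non-root user with an fsGroup. A minimal sketch, where the secret and container names come from the log and the user, group and mode values are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  securityContext:
    runAsUser: 1000                  # non-root user (illustrative)
    fsGroup: 2000                    # group ownership applied to volume files (illustrative)
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29       # illustrative
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-5f794819-4f25-11ea-af88-0242ac110007
      defaultMode: 0400              # file mode inside the volume (illustrative)
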
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:27:51.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:27:52.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-wmfp8" for this suite.
Feb 14 12:28:18.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:28:18.669: INFO: namespace: e2e-tests-pods-wmfp8, resource: bindings, ignored listing per whitelist
Feb 14 12:28:18.778: INFO: namespace e2e-tests-pods-wmfp8 deletion completed in 26.371713625s

• [SLOW TEST:26.910 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
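The QoS class checked above is not set directly on the pod; it is derived from the resource configuration. A pod whose containers all have cpu and memory requests equal to their limits is classed Guaranteed, any other mix of requests and limits gives Burstable, and no requests or limits at all gives BestEffort. A minimal sketch of a pod that would be assigned the Guaranteed class (names and quantities are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 100m
        memory: 50Mi
      limits:
        cpu: 100m                    # equal to the request
        memory: 50Mi                 # equal to the request

The assigned class is reported read-only in status.qosClass, which is what the 'verifying QOS class is set on the pod' step inspects.
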
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:28:18.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 14 12:28:19.243: INFO: Waiting up to 5m0s for pod "downward-api-7b6df335-4f25-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-5bsql" to be "success or failure"
Feb 14 12:28:19.251: INFO: Pod "downward-api-7b6df335-4f25-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.964168ms
Feb 14 12:28:21.499: INFO: Pod "downward-api-7b6df335-4f25-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256238698s
Feb 14 12:28:23.525: INFO: Pod "downward-api-7b6df335-4f25-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282424125s
Feb 14 12:28:25.699: INFO: Pod "downward-api-7b6df335-4f25-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.4561596s
Feb 14 12:28:27.719: INFO: Pod "downward-api-7b6df335-4f25-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.476077947s
Feb 14 12:28:29.740: INFO: Pod "downward-api-7b6df335-4f25-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.4972253s
STEP: Saw pod success
Feb 14 12:28:29.740: INFO: Pod "downward-api-7b6df335-4f25-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:28:29.747: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-7b6df335-4f25-11ea-af88-0242ac110007 container dapi-container: 
STEP: delete the pod
Feb 14 12:28:29.934: INFO: Waiting for pod downward-api-7b6df335-4f25-11ea-af88-0242ac110007 to disappear
Feb 14 12:28:29.944: INFO: Pod downward-api-7b6df335-4f25-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:28:29.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-5bsql" for this suite.
Feb 14 12:28:36.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:28:36.303: INFO: namespace: e2e-tests-downward-api-5bsql, resource: bindings, ignored listing per whitelist
Feb 14 12:28:36.379: INFO: namespace e2e-tests-downward-api-5bsql deletion completed in 6.4204188s

• [SLOW TEST:17.600 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
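The downward API pod above exposes its own name, namespace and IP to the container as environment variables. A minimal sketch of the relevant env block; the container name dapi-container comes from the log, while the image and command are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29   # illustrative
    command: ["sh", "-c", "env"]            # illustrative; prints the injected variables
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
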
SS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:28:36.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-dl9xd
I0214 12:28:36.729016       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-dl9xd, replica count: 1
I0214 12:28:37.779813       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 12:28:38.780474       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 12:28:39.781149       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 12:28:40.781719       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 12:28:41.782303       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 12:28:42.782967       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 12:28:43.783472       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 12:28:44.783945       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 12:28:45.784832       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 12:28:46.785564       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 12:28:47.786015       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 12:28:48.786512       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 12:28:49.787017       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 14 12:28:49.946: INFO: Created: latency-svc-7nk8g
Feb 14 12:28:50.136: INFO: Got endpoints: latency-svc-7nk8g [248.788464ms]
Feb 14 12:28:50.823: INFO: Created: latency-svc-v42kf
Feb 14 12:28:50.856: INFO: Got endpoints: latency-svc-v42kf [719.299235ms]
Feb 14 12:28:50.901: INFO: Created: latency-svc-25p8t
Feb 14 12:28:51.092: INFO: Got endpoints: latency-svc-25p8t [954.783537ms]
Feb 14 12:28:51.127: INFO: Created: latency-svc-gps7h
Feb 14 12:28:51.189: INFO: Got endpoints: latency-svc-gps7h [1.051872572s]
Feb 14 12:28:51.194: INFO: Created: latency-svc-t79sw
Feb 14 12:28:51.316: INFO: Got endpoints: latency-svc-t79sw [224.143103ms]
Feb 14 12:28:51.364: INFO: Created: latency-svc-kdh6g
Feb 14 12:28:51.403: INFO: Got endpoints: latency-svc-kdh6g [1.266383626s]
Feb 14 12:28:51.620: INFO: Created: latency-svc-4n57g
Feb 14 12:28:51.620: INFO: Got endpoints: latency-svc-4n57g [1.482397801s]
Feb 14 12:28:51.740: INFO: Created: latency-svc-68c9f
Feb 14 12:28:51.771: INFO: Got endpoints: latency-svc-68c9f [1.634360505s]
Feb 14 12:28:51.944: INFO: Created: latency-svc-c49c4
Feb 14 12:28:51.977: INFO: Got endpoints: latency-svc-c49c4 [1.83966475s]
Feb 14 12:28:52.057: INFO: Created: latency-svc-8wcfp
Feb 14 12:28:52.223: INFO: Got endpoints: latency-svc-8wcfp [2.085548893s]
Feb 14 12:28:52.268: INFO: Created: latency-svc-gzxk9
Feb 14 12:28:52.462: INFO: Got endpoints: latency-svc-gzxk9 [2.325044682s]
Feb 14 12:28:52.517: INFO: Created: latency-svc-cw9zb
Feb 14 12:28:52.530: INFO: Got endpoints: latency-svc-cw9zb [2.392978036s]
Feb 14 12:28:52.695: INFO: Created: latency-svc-4mdjm
Feb 14 12:28:52.697: INFO: Got endpoints: latency-svc-4mdjm [2.560308679s]
Feb 14 12:28:52.932: INFO: Created: latency-svc-pddz4
Feb 14 12:28:52.964: INFO: Got endpoints: latency-svc-pddz4 [2.827183718s]
Feb 14 12:28:53.039: INFO: Created: latency-svc-w5c69
Feb 14 12:28:53.176: INFO: Got endpoints: latency-svc-w5c69 [3.038663521s]
Feb 14 12:28:53.216: INFO: Created: latency-svc-dcflr
Feb 14 12:28:53.222: INFO: Got endpoints: latency-svc-dcflr [3.085133494s]
Feb 14 12:28:53.394: INFO: Created: latency-svc-j2rhb
Feb 14 12:28:53.442: INFO: Got endpoints: latency-svc-j2rhb [3.305099462s]
Feb 14 12:28:53.653: INFO: Created: latency-svc-dczqb
Feb 14 12:28:53.663: INFO: Got endpoints: latency-svc-dczqb [2.806836249s]
Feb 14 12:28:53.826: INFO: Created: latency-svc-jmfbc
Feb 14 12:28:53.846: INFO: Got endpoints: latency-svc-jmfbc [2.656375482s]
Feb 14 12:28:53.889: INFO: Created: latency-svc-l97kz
Feb 14 12:28:54.037: INFO: Got endpoints: latency-svc-l97kz [2.720031722s]
Feb 14 12:28:54.079: INFO: Created: latency-svc-2cqtr
Feb 14 12:28:54.091: INFO: Got endpoints: latency-svc-2cqtr [2.687697898s]
Feb 14 12:28:54.281: INFO: Created: latency-svc-gckwh
Feb 14 12:28:54.334: INFO: Got endpoints: latency-svc-gckwh [2.714557756s]
Feb 14 12:28:54.539: INFO: Created: latency-svc-4zn85
Feb 14 12:28:54.554: INFO: Got endpoints: latency-svc-4zn85 [2.783137014s]
Feb 14 12:28:54.713: INFO: Created: latency-svc-wsmqr
Feb 14 12:28:54.765: INFO: Created: latency-svc-7kjp6
Feb 14 12:28:54.771: INFO: Got endpoints: latency-svc-wsmqr [2.794557584s]
Feb 14 12:28:54.796: INFO: Got endpoints: latency-svc-7kjp6 [2.572998655s]
Feb 14 12:28:54.910: INFO: Created: latency-svc-bp47w
Feb 14 12:28:54.934: INFO: Got endpoints: latency-svc-bp47w [2.471351303s]
Feb 14 12:28:55.007: INFO: Created: latency-svc-sntsf
Feb 14 12:28:55.200: INFO: Got endpoints: latency-svc-sntsf [2.669490929s]
Feb 14 12:28:55.226: INFO: Created: latency-svc-44sdn
Feb 14 12:28:55.247: INFO: Got endpoints: latency-svc-44sdn [2.549795278s]
Feb 14 12:28:55.381: INFO: Created: latency-svc-k6pmb
Feb 14 12:28:55.405: INFO: Got endpoints: latency-svc-k6pmb [2.439945277s]
Feb 14 12:28:55.648: INFO: Created: latency-svc-bqwln
Feb 14 12:28:55.710: INFO: Got endpoints: latency-svc-bqwln [2.533126232s]
Feb 14 12:28:55.744: INFO: Created: latency-svc-sx2zh
Feb 14 12:28:55.856: INFO: Got endpoints: latency-svc-sx2zh [2.633399517s]
Feb 14 12:28:55.874: INFO: Created: latency-svc-fzxmb
Feb 14 12:28:55.920: INFO: Got endpoints: latency-svc-fzxmb [2.477668608s]
Feb 14 12:28:56.119: INFO: Created: latency-svc-cw9tp
Feb 14 12:28:56.133: INFO: Got endpoints: latency-svc-cw9tp [2.469645713s]
Feb 14 12:28:56.347: INFO: Created: latency-svc-668bb
Feb 14 12:28:56.355: INFO: Got endpoints: latency-svc-668bb [2.509029457s]
Feb 14 12:28:56.410: INFO: Created: latency-svc-r958k
Feb 14 12:28:56.541: INFO: Got endpoints: latency-svc-r958k [2.503906229s]
Feb 14 12:28:56.604: INFO: Created: latency-svc-mzd9j
Feb 14 12:28:56.606: INFO: Got endpoints: latency-svc-mzd9j [2.514251717s]
Feb 14 12:28:56.765: INFO: Created: latency-svc-7rddv
Feb 14 12:28:56.807: INFO: Got endpoints: latency-svc-7rddv [2.471729474s]
Feb 14 12:28:56.834: INFO: Created: latency-svc-58sxk
Feb 14 12:28:57.081: INFO: Got endpoints: latency-svc-58sxk [2.526841999s]
Feb 14 12:28:57.161: INFO: Created: latency-svc-zh9km
Feb 14 12:28:57.381: INFO: Got endpoints: latency-svc-zh9km [2.609107169s]
Feb 14 12:28:57.429: INFO: Created: latency-svc-2t59p
Feb 14 12:28:57.442: INFO: Got endpoints: latency-svc-2t59p [2.646003114s]
Feb 14 12:28:57.591: INFO: Created: latency-svc-wckx9
Feb 14 12:28:57.619: INFO: Got endpoints: latency-svc-wckx9 [2.684286947s]
Feb 14 12:28:57.692: INFO: Created: latency-svc-srlv8
Feb 14 12:28:57.801: INFO: Got endpoints: latency-svc-srlv8 [2.599769759s]
Feb 14 12:28:57.848: INFO: Created: latency-svc-gp9s9
Feb 14 12:28:58.007: INFO: Got endpoints: latency-svc-gp9s9 [2.759859686s]
Feb 14 12:28:58.015: INFO: Created: latency-svc-h4tck
Feb 14 12:28:58.086: INFO: Got endpoints: latency-svc-h4tck [2.680163797s]
Feb 14 12:28:58.247: INFO: Created: latency-svc-4qwrm
Feb 14 12:28:58.263: INFO: Got endpoints: latency-svc-4qwrm [2.55259868s]
Feb 14 12:28:58.353: INFO: Created: latency-svc-bmd7p
Feb 14 12:28:58.419: INFO: Got endpoints: latency-svc-bmd7p [2.561933408s]
Feb 14 12:28:58.456: INFO: Created: latency-svc-qgkb8
Feb 14 12:28:58.507: INFO: Got endpoints: latency-svc-qgkb8 [2.58658928s]
Feb 14 12:28:58.710: INFO: Created: latency-svc-ntblv
Feb 14 12:28:58.734: INFO: Got endpoints: latency-svc-ntblv [2.600947516s]
Feb 14 12:28:58.896: INFO: Created: latency-svc-z9jwv
Feb 14 12:28:58.906: INFO: Got endpoints: latency-svc-z9jwv [2.550030009s]
Feb 14 12:28:59.151: INFO: Created: latency-svc-srqtg
Feb 14 12:28:59.186: INFO: Got endpoints: latency-svc-srqtg [2.644941969s]
Feb 14 12:28:59.228: INFO: Created: latency-svc-pcc2l
Feb 14 12:28:59.375: INFO: Got endpoints: latency-svc-pcc2l [2.768509016s]
Feb 14 12:28:59.411: INFO: Created: latency-svc-8p774
Feb 14 12:28:59.443: INFO: Got endpoints: latency-svc-8p774 [2.635891063s]
Feb 14 12:28:59.565: INFO: Created: latency-svc-h7krj
Feb 14 12:28:59.580: INFO: Got endpoints: latency-svc-h7krj [2.498645273s]
Feb 14 12:28:59.646: INFO: Created: latency-svc-hrc87
Feb 14 12:28:59.802: INFO: Got endpoints: latency-svc-hrc87 [2.420668448s]
Feb 14 12:28:59.836: INFO: Created: latency-svc-6hbvx
Feb 14 12:28:59.857: INFO: Got endpoints: latency-svc-6hbvx [2.414485309s]
Feb 14 12:28:59.892: INFO: Created: latency-svc-88tm8
Feb 14 12:29:00.107: INFO: Got endpoints: latency-svc-88tm8 [2.487317664s]
Feb 14 12:29:00.157: INFO: Created: latency-svc-v7nf2
Feb 14 12:29:00.181: INFO: Got endpoints: latency-svc-v7nf2 [2.380480569s]
Feb 14 12:29:04.673: INFO: Created: latency-svc-8pbqk
Feb 14 12:29:04.832: INFO: Got endpoints: latency-svc-8pbqk [6.825002604s]
Feb 14 12:29:04.883: INFO: Created: latency-svc-dszbf
Feb 14 12:29:04.909: INFO: Got endpoints: latency-svc-dszbf [6.823181915s]
Feb 14 12:29:05.212: INFO: Created: latency-svc-26rdm
Feb 14 12:29:05.279: INFO: Got endpoints: latency-svc-26rdm [7.016058367s]
Feb 14 12:29:05.424: INFO: Created: latency-svc-vklfv
Feb 14 12:29:05.447: INFO: Got endpoints: latency-svc-vklfv [7.028096862s]
Feb 14 12:29:05.689: INFO: Created: latency-svc-sqsxt
Feb 14 12:29:05.721: INFO: Got endpoints: latency-svc-sqsxt [7.21347832s]
Feb 14 12:29:05.771: INFO: Created: latency-svc-qc9wb
Feb 14 12:29:05.868: INFO: Got endpoints: latency-svc-qc9wb [7.134360187s]
Feb 14 12:29:05.903: INFO: Created: latency-svc-pncjq
Feb 14 12:29:05.909: INFO: Got endpoints: latency-svc-pncjq [7.003629462s]
Feb 14 12:29:06.049: INFO: Created: latency-svc-f58dv
Feb 14 12:29:06.085: INFO: Got endpoints: latency-svc-f58dv [6.898650766s]
Feb 14 12:29:06.127: INFO: Created: latency-svc-2sspx
Feb 14 12:29:06.215: INFO: Got endpoints: latency-svc-2sspx [6.84000492s]
Feb 14 12:29:06.253: INFO: Created: latency-svc-hvgwt
Feb 14 12:29:06.254: INFO: Got endpoints: latency-svc-hvgwt [6.811015819s]
Feb 14 12:29:06.418: INFO: Created: latency-svc-6znkh
Feb 14 12:29:06.442: INFO: Got endpoints: latency-svc-6znkh [6.861330305s]
Feb 14 12:29:06.501: INFO: Created: latency-svc-ck78c
Feb 14 12:29:06.625: INFO: Got endpoints: latency-svc-ck78c [6.823225767s]
Feb 14 12:29:06.680: INFO: Created: latency-svc-65wls
Feb 14 12:29:06.786: INFO: Got endpoints: latency-svc-65wls [6.928325348s]
Feb 14 12:29:06.810: INFO: Created: latency-svc-c5w5x
Feb 14 12:29:06.839: INFO: Got endpoints: latency-svc-c5w5x [6.732022233s]
Feb 14 12:29:06.986: INFO: Created: latency-svc-7kdkj
Feb 14 12:29:07.009: INFO: Got endpoints: latency-svc-7kdkj [6.827224044s]
Feb 14 12:29:07.066: INFO: Created: latency-svc-t4j64
Feb 14 12:29:07.153: INFO: Got endpoints: latency-svc-t4j64 [2.320947292s]
Feb 14 12:29:07.201: INFO: Created: latency-svc-nhvbt
Feb 14 12:29:07.208: INFO: Got endpoints: latency-svc-nhvbt [2.297876735s]
Feb 14 12:29:07.239: INFO: Created: latency-svc-9cf7g
Feb 14 12:29:07.354: INFO: Got endpoints: latency-svc-9cf7g [2.07421214s]
Feb 14 12:29:07.426: INFO: Created: latency-svc-9r5jj
Feb 14 12:29:07.435: INFO: Got endpoints: latency-svc-9r5jj [1.988097162s]
Feb 14 12:29:07.614: INFO: Created: latency-svc-mhr62
Feb 14 12:29:07.741: INFO: Got endpoints: latency-svc-mhr62 [2.020132585s]
Feb 14 12:29:07.793: INFO: Created: latency-svc-2bbdx
Feb 14 12:29:07.806: INFO: Got endpoints: latency-svc-2bbdx [1.937346654s]
Feb 14 12:29:07.921: INFO: Created: latency-svc-tf92f
Feb 14 12:29:07.939: INFO: Got endpoints: latency-svc-tf92f [2.029478657s]
Feb 14 12:29:08.003: INFO: Created: latency-svc-8tbv6
Feb 14 12:29:08.137: INFO: Got endpoints: latency-svc-8tbv6 [2.051331388s]
Feb 14 12:29:08.175: INFO: Created: latency-svc-rjctr
Feb 14 12:29:08.209: INFO: Got endpoints: latency-svc-rjctr [1.993435032s]
Feb 14 12:29:08.365: INFO: Created: latency-svc-5cw6q
Feb 14 12:29:08.365: INFO: Got endpoints: latency-svc-5cw6q [2.110391732s]
Feb 14 12:29:08.551: INFO: Created: latency-svc-kfqfh
Feb 14 12:29:08.552: INFO: Got endpoints: latency-svc-kfqfh [2.109686098s]
Feb 14 12:29:08.673: INFO: Created: latency-svc-mzxcb
Feb 14 12:29:08.695: INFO: Got endpoints: latency-svc-mzxcb [2.069834916s]
Feb 14 12:29:08.746: INFO: Created: latency-svc-x29st
Feb 14 12:29:08.833: INFO: Got endpoints: latency-svc-x29st [2.046766748s]
Feb 14 12:29:08.849: INFO: Created: latency-svc-fmmz4
Feb 14 12:29:08.872: INFO: Got endpoints: latency-svc-fmmz4 [2.032970932s]
Feb 14 12:29:08.935: INFO: Created: latency-svc-vxtvx
Feb 14 12:29:09.042: INFO: Got endpoints: latency-svc-vxtvx [2.03225067s]
Feb 14 12:29:09.077: INFO: Created: latency-svc-gtnwf
Feb 14 12:29:09.117: INFO: Got endpoints: latency-svc-gtnwf [1.963531707s]
Feb 14 12:29:09.215: INFO: Created: latency-svc-x64dv
Feb 14 12:29:09.250: INFO: Got endpoints: latency-svc-x64dv [2.042561412s]
Feb 14 12:29:09.315: INFO: Created: latency-svc-qv69f
Feb 14 12:29:09.424: INFO: Created: latency-svc-b7tzl
Feb 14 12:29:09.425: INFO: Got endpoints: latency-svc-qv69f [2.070597303s]
Feb 14 12:29:09.452: INFO: Got endpoints: latency-svc-b7tzl [2.016111996s]
Feb 14 12:29:09.872: INFO: Created: latency-svc-f22jd
Feb 14 12:29:09.968: INFO: Got endpoints: latency-svc-f22jd [2.225907836s]
Feb 14 12:29:10.410: INFO: Created: latency-svc-kwcdz
Feb 14 12:29:10.651: INFO: Got endpoints: latency-svc-kwcdz [2.844931167s]
Feb 14 12:29:12.413: INFO: Created: latency-svc-pqbsz
Feb 14 12:29:12.425: INFO: Got endpoints: latency-svc-pqbsz [4.486019699s]
Feb 14 12:29:12.620: INFO: Created: latency-svc-hf7w4
Feb 14 12:29:12.624: INFO: Got endpoints: latency-svc-hf7w4 [4.485950578s]
Feb 14 12:29:12.665: INFO: Created: latency-svc-7j9vm
Feb 14 12:29:12.832: INFO: Got endpoints: latency-svc-7j9vm [4.622551261s]
Feb 14 12:29:12.852: INFO: Created: latency-svc-bkbts
Feb 14 12:29:12.866: INFO: Got endpoints: latency-svc-bkbts [4.501205746s]
Feb 14 12:29:13.064: INFO: Created: latency-svc-88sxk
Feb 14 12:29:13.064: INFO: Got endpoints: latency-svc-88sxk [4.511881825s]
Feb 14 12:29:13.148: INFO: Created: latency-svc-s5g6q
Feb 14 12:29:13.220: INFO: Got endpoints: latency-svc-s5g6q [4.523925065s]
Feb 14 12:29:13.238: INFO: Created: latency-svc-jfghr
Feb 14 12:29:13.262: INFO: Got endpoints: latency-svc-jfghr [4.429188948s]
Feb 14 12:29:13.365: INFO: Created: latency-svc-mqxcj
Feb 14 12:29:13.457: INFO: Got endpoints: latency-svc-mqxcj [4.584717616s]
Feb 14 12:29:13.465: INFO: Created: latency-svc-cgzx6
Feb 14 12:29:13.524: INFO: Got endpoints: latency-svc-cgzx6 [4.482174552s]
Feb 14 12:29:13.607: INFO: Created: latency-svc-phtsf
Feb 14 12:29:13.748: INFO: Got endpoints: latency-svc-phtsf [4.630878848s]
Feb 14 12:29:13.819: INFO: Created: latency-svc-gh6rm
Feb 14 12:29:13.975: INFO: Got endpoints: latency-svc-gh6rm [4.724242363s]
Feb 14 12:29:14.002: INFO: Created: latency-svc-xjs86
Feb 14 12:29:14.021: INFO: Got endpoints: latency-svc-xjs86 [4.595778392s]
Feb 14 12:29:14.182: INFO: Created: latency-svc-s26p9
Feb 14 12:29:14.194: INFO: Got endpoints: latency-svc-s26p9 [4.74268279s]
Feb 14 12:29:14.234: INFO: Created: latency-svc-7bv7q
Feb 14 12:29:14.374: INFO: Got endpoints: latency-svc-7bv7q [4.40542549s]
Feb 14 12:29:14.425: INFO: Created: latency-svc-5hflw
Feb 14 12:29:14.458: INFO: Got endpoints: latency-svc-5hflw [3.806193731s]
Feb 14 12:29:14.648: INFO: Created: latency-svc-fqlhk
Feb 14 12:29:14.701: INFO: Got endpoints: latency-svc-fqlhk [2.275818971s]
Feb 14 12:29:14.829: INFO: Created: latency-svc-6ff96
Feb 14 12:29:14.845: INFO: Got endpoints: latency-svc-6ff96 [2.221044254s]
Feb 14 12:29:14.879: INFO: Created: latency-svc-pfnrr
Feb 14 12:29:15.014: INFO: Got endpoints: latency-svc-pfnrr [2.182007038s]
Feb 14 12:29:15.016: INFO: Created: latency-svc-fmftp
Feb 14 12:29:15.026: INFO: Got endpoints: latency-svc-fmftp [2.159558097s]
Feb 14 12:29:15.064: INFO: Created: latency-svc-lkr2j
Feb 14 12:29:15.088: INFO: Got endpoints: latency-svc-lkr2j [2.023795686s]
Feb 14 12:29:15.209: INFO: Created: latency-svc-jc8mf
Feb 14 12:29:15.221: INFO: Got endpoints: latency-svc-jc8mf [2.000760596s]
Feb 14 12:29:15.262: INFO: Created: latency-svc-n9sk9
Feb 14 12:29:15.290: INFO: Got endpoints: latency-svc-n9sk9 [2.027659257s]
Feb 14 12:29:15.389: INFO: Created: latency-svc-d8q6r
Feb 14 12:29:15.453: INFO: Got endpoints: latency-svc-d8q6r [1.995590376s]
Feb 14 12:29:15.480: INFO: Created: latency-svc-6vt7k
Feb 14 12:29:15.565: INFO: Got endpoints: latency-svc-6vt7k [2.040089801s]
Feb 14 12:29:15.651: INFO: Created: latency-svc-ml4p9
Feb 14 12:29:15.817: INFO: Got endpoints: latency-svc-ml4p9 [2.06853414s]
Feb 14 12:29:15.856: INFO: Created: latency-svc-2ldkw
Feb 14 12:29:16.020: INFO: Got endpoints: latency-svc-2ldkw [2.044497215s]
Feb 14 12:29:16.044: INFO: Created: latency-svc-vvhbz
Feb 14 12:29:16.049: INFO: Got endpoints: latency-svc-vvhbz [2.027836298s]
Feb 14 12:29:16.312: INFO: Created: latency-svc-5pn6d
Feb 14 12:29:16.323: INFO: Got endpoints: latency-svc-5pn6d [2.128237981s]
Feb 14 12:29:16.458: INFO: Created: latency-svc-4wmrp
Feb 14 12:29:16.472: INFO: Got endpoints: latency-svc-4wmrp [2.098250162s]
Feb 14 12:29:16.581: INFO: Created: latency-svc-68jpl
Feb 14 12:29:17.595: INFO: Got endpoints: latency-svc-68jpl [3.136577306s]
Feb 14 12:29:17.764: INFO: Created: latency-svc-hjh2l
Feb 14 12:29:17.777: INFO: Got endpoints: latency-svc-hjh2l [3.075505568s]
Feb 14 12:29:19.204: INFO: Created: latency-svc-wtlrr
Feb 14 12:29:20.607: INFO: Got endpoints: latency-svc-wtlrr [5.762085705s]
Feb 14 12:29:20.628: INFO: Created: latency-svc-g2wd2
Feb 14 12:29:20.869: INFO: Created: latency-svc-kb87g
Feb 14 12:29:20.880: INFO: Got endpoints: latency-svc-g2wd2 [5.865427093s]
Feb 14 12:29:20.907: INFO: Got endpoints: latency-svc-kb87g [5.881113424s]
Feb 14 12:29:21.110: INFO: Created: latency-svc-w82fj
Feb 14 12:29:21.131: INFO: Got endpoints: latency-svc-w82fj [6.04329198s]
Feb 14 12:29:21.315: INFO: Created: latency-svc-2lbfb
Feb 14 12:29:21.321: INFO: Got endpoints: latency-svc-2lbfb [6.100421547s]
Feb 14 12:29:21.477: INFO: Created: latency-svc-pj8jp
Feb 14 12:29:21.487: INFO: Got endpoints: latency-svc-pj8jp [6.196777514s]
Feb 14 12:29:21.672: INFO: Created: latency-svc-cb96p
Feb 14 12:29:21.688: INFO: Created: latency-svc-n9nlp
Feb 14 12:29:21.701: INFO: Got endpoints: latency-svc-cb96p [6.247839216s]
Feb 14 12:29:21.702: INFO: Got endpoints: latency-svc-n9nlp [6.136857703s]
Feb 14 12:29:21.813: INFO: Created: latency-svc-ks4r4
Feb 14 12:29:21.875: INFO: Got endpoints: latency-svc-ks4r4 [6.057651643s]
Feb 14 12:29:21.883: INFO: Created: latency-svc-ct568
Feb 14 12:29:21.904: INFO: Got endpoints: latency-svc-ct568 [5.883858117s]
Feb 14 12:29:22.030: INFO: Created: latency-svc-m77vj
Feb 14 12:29:22.042: INFO: Got endpoints: latency-svc-m77vj [5.992698696s]
Feb 14 12:29:22.182: INFO: Created: latency-svc-srhpn
Feb 14 12:29:22.189: INFO: Got endpoints: latency-svc-srhpn [5.866072767s]
Feb 14 12:29:22.324: INFO: Created: latency-svc-q28pq
Feb 14 12:29:22.337: INFO: Got endpoints: latency-svc-q28pq [5.864925756s]
Feb 14 12:29:22.411: INFO: Created: latency-svc-f65b8
Feb 14 12:29:22.567: INFO: Got endpoints: latency-svc-f65b8 [4.972211538s]
Feb 14 12:29:22.829: INFO: Created: latency-svc-75b4j
Feb 14 12:29:22.872: INFO: Got endpoints: latency-svc-75b4j [5.094557397s]
Feb 14 12:29:22.971: INFO: Created: latency-svc-npkmg
Feb 14 12:29:23.008: INFO: Got endpoints: latency-svc-npkmg [2.400714898s]
Feb 14 12:29:23.053: INFO: Created: latency-svc-8vdww
Feb 14 12:29:23.164: INFO: Got endpoints: latency-svc-8vdww [2.284385123s]
Feb 14 12:29:23.182: INFO: Created: latency-svc-fwtxk
Feb 14 12:29:23.191: INFO: Got endpoints: latency-svc-fwtxk [2.284266283s]
Feb 14 12:29:23.432: INFO: Created: latency-svc-5zqhp
Feb 14 12:29:23.513: INFO: Got endpoints: latency-svc-5zqhp [2.381384779s]
Feb 14 12:29:23.748: INFO: Created: latency-svc-pg5qd
Feb 14 12:29:23.759: INFO: Got endpoints: latency-svc-pg5qd [2.437732612s]
Feb 14 12:29:23.970: INFO: Created: latency-svc-chbdh
Feb 14 12:29:23.993: INFO: Got endpoints: latency-svc-chbdh [2.505697168s]
Feb 14 12:29:24.349: INFO: Created: latency-svc-vj5lp
Feb 14 12:29:24.359: INFO: Got endpoints: latency-svc-vj5lp [2.658158758s]
Feb 14 12:29:24.559: INFO: Created: latency-svc-z6qqw
Feb 14 12:29:24.562: INFO: Got endpoints: latency-svc-z6qqw [2.859780386s]
Feb 14 12:29:24.748: INFO: Created: latency-svc-hwb2m
Feb 14 12:29:24.788: INFO: Got endpoints: latency-svc-hwb2m [2.912385678s]
Feb 14 12:29:24.868: INFO: Created: latency-svc-bh98b
Feb 14 12:29:24.889: INFO: Got endpoints: latency-svc-bh98b [2.984226417s]
Feb 14 12:29:25.052: INFO: Created: latency-svc-mm4b5
Feb 14 12:29:25.055: INFO: Got endpoints: latency-svc-mm4b5 [3.012822921s]
Feb 14 12:29:25.227: INFO: Created: latency-svc-p4lv9
Feb 14 12:29:25.233: INFO: Got endpoints: latency-svc-p4lv9 [3.043602348s]
Feb 14 12:29:25.375: INFO: Created: latency-svc-mhlxz
Feb 14 12:29:25.402: INFO: Got endpoints: latency-svc-mhlxz [3.064483244s]
Feb 14 12:29:25.556: INFO: Created: latency-svc-9cn4k
Feb 14 12:29:25.570: INFO: Got endpoints: latency-svc-9cn4k [3.002787524s]
Feb 14 12:29:25.712: INFO: Created: latency-svc-5drq2
Feb 14 12:29:25.724: INFO: Got endpoints: latency-svc-5drq2 [2.851262875s]
Feb 14 12:29:25.926: INFO: Created: latency-svc-ft4rk
Feb 14 12:29:25.942: INFO: Got endpoints: latency-svc-ft4rk [2.933440553s]
Feb 14 12:29:26.101: INFO: Created: latency-svc-l87s4
Feb 14 12:29:26.125: INFO: Got endpoints: latency-svc-l87s4 [2.960131067s]
Feb 14 12:29:26.264: INFO: Created: latency-svc-mlf4t
Feb 14 12:29:26.302: INFO: Got endpoints: latency-svc-mlf4t [3.109993326s]
Feb 14 12:29:26.432: INFO: Created: latency-svc-hs6df
Feb 14 12:29:26.454: INFO: Got endpoints: latency-svc-hs6df [2.941340575s]
Feb 14 12:29:26.670: INFO: Created: latency-svc-jpbtm
Feb 14 12:29:26.687: INFO: Got endpoints: latency-svc-jpbtm [2.927912991s]
Feb 14 12:29:26.854: INFO: Created: latency-svc-44dt2
Feb 14 12:29:26.871: INFO: Got endpoints: latency-svc-44dt2 [2.877002095s]
Feb 14 12:29:27.153: INFO: Created: latency-svc-9m2w2
Feb 14 12:29:27.172: INFO: Got endpoints: latency-svc-9m2w2 [2.812401102s]
Feb 14 12:29:27.217: INFO: Created: latency-svc-sj9k7
Feb 14 12:29:27.228: INFO: Got endpoints: latency-svc-sj9k7 [2.665724604s]
Feb 14 12:29:27.398: INFO: Created: latency-svc-7bpsr
Feb 14 12:29:27.413: INFO: Got endpoints: latency-svc-7bpsr [2.624655707s]
Feb 14 12:29:27.536: INFO: Created: latency-svc-7p7tk
Feb 14 12:29:27.551: INFO: Got endpoints: latency-svc-7p7tk [2.661899169s]
Feb 14 12:29:27.588: INFO: Created: latency-svc-n2j7f
Feb 14 12:29:27.606: INFO: Got endpoints: latency-svc-n2j7f [2.551570433s]
Feb 14 12:29:27.694: INFO: Created: latency-svc-tlbzb
Feb 14 12:29:27.715: INFO: Got endpoints: latency-svc-tlbzb [2.482151429s]
Feb 14 12:29:27.801: INFO: Created: latency-svc-n64l7
Feb 14 12:29:27.877: INFO: Got endpoints: latency-svc-n64l7 [2.475074104s]
Feb 14 12:29:27.899: INFO: Created: latency-svc-vgrks
Feb 14 12:29:27.899: INFO: Got endpoints: latency-svc-vgrks [2.328399971s]
Feb 14 12:29:27.930: INFO: Created: latency-svc-57grr
Feb 14 12:29:27.944: INFO: Got endpoints: latency-svc-57grr [2.219723474s]
Feb 14 12:29:28.121: INFO: Created: latency-svc-dd7qt
Feb 14 12:29:28.135: INFO: Got endpoints: latency-svc-dd7qt [2.192523573s]
Feb 14 12:29:28.207: INFO: Created: latency-svc-hc9q4
Feb 14 12:29:28.262: INFO: Got endpoints: latency-svc-hc9q4 [2.136565922s]
Feb 14 12:29:28.345: INFO: Created: latency-svc-82z25
Feb 14 12:29:28.354: INFO: Got endpoints: latency-svc-82z25 [2.05205632s]
Feb 14 12:29:28.444: INFO: Created: latency-svc-cfh2l
Feb 14 12:29:28.458: INFO: Got endpoints: latency-svc-cfh2l [2.003345074s]
Feb 14 12:29:28.523: INFO: Created: latency-svc-dwbth
Feb 14 12:29:28.722: INFO: Got endpoints: latency-svc-dwbth [2.034165756s]
Feb 14 12:29:28.746: INFO: Created: latency-svc-24sl2
Feb 14 12:29:28.766: INFO: Got endpoints: latency-svc-24sl2 [1.894418257s]
Feb 14 12:29:28.933: INFO: Created: latency-svc-zblbl
Feb 14 12:29:28.950: INFO: Got endpoints: latency-svc-zblbl [1.778035382s]
Feb 14 12:29:28.964: INFO: Created: latency-svc-lsmfn
Feb 14 12:29:28.980: INFO: Got endpoints: latency-svc-lsmfn [1.751414516s]
Feb 14 12:29:29.086: INFO: Created: latency-svc-zv5vx
Feb 14 12:29:29.117: INFO: Got endpoints: latency-svc-zv5vx [1.704337727s]
Feb 14 12:29:29.303: INFO: Created: latency-svc-5nrsd
Feb 14 12:29:29.324: INFO: Got endpoints: latency-svc-5nrsd [1.773096633s]
Feb 14 12:29:29.381: INFO: Created: latency-svc-qpsrh
Feb 14 12:29:29.394: INFO: Got endpoints: latency-svc-qpsrh [1.788044117s]
Feb 14 12:29:29.509: INFO: Created: latency-svc-k6qh7
Feb 14 12:29:29.513: INFO: Got endpoints: latency-svc-k6qh7 [1.797523538s]
Feb 14 12:29:29.773: INFO: Created: latency-svc-tk674
Feb 14 12:29:29.799: INFO: Got endpoints: latency-svc-tk674 [1.921192191s]
Feb 14 12:29:29.837: INFO: Created: latency-svc-zq79g
Feb 14 12:29:29.932: INFO: Got endpoints: latency-svc-zq79g [2.03280966s]
Feb 14 12:29:29.955: INFO: Created: latency-svc-zc9mn
Feb 14 12:29:29.968: INFO: Got endpoints: latency-svc-zc9mn [2.024433997s]
Feb 14 12:29:30.023: INFO: Created: latency-svc-4lgr2
Feb 14 12:29:30.115: INFO: Got endpoints: latency-svc-4lgr2 [1.980044842s]
Feb 14 12:29:30.151: INFO: Created: latency-svc-l4gv6
Feb 14 12:29:30.159: INFO: Got endpoints: latency-svc-l4gv6 [1.896798765s]
Feb 14 12:29:30.214: INFO: Created: latency-svc-7hjd6
Feb 14 12:29:30.296: INFO: Got endpoints: latency-svc-7hjd6 [1.942171187s]
Feb 14 12:29:30.331: INFO: Created: latency-svc-mnmsb
Feb 14 12:29:30.339: INFO: Got endpoints: latency-svc-mnmsb [1.881293985s]
Feb 14 12:29:30.684: INFO: Created: latency-svc-hdsrg
Feb 14 12:29:30.749: INFO: Got endpoints: latency-svc-hdsrg [2.026809718s]
Feb 14 12:29:30.999: INFO: Created: latency-svc-jrghc
Feb 14 12:29:31.118: INFO: Got endpoints: latency-svc-jrghc [2.351837564s]
Feb 14 12:29:31.164: INFO: Created: latency-svc-xkcx9
Feb 14 12:29:31.215: INFO: Got endpoints: latency-svc-xkcx9 [2.264316459s]
Feb 14 12:29:31.460: INFO: Created: latency-svc-hg2s9
Feb 14 12:29:31.571: INFO: Got endpoints: latency-svc-hg2s9 [2.591320181s]
Feb 14 12:29:31.658: INFO: Created: latency-svc-s46m9
Feb 14 12:29:31.761: INFO: Got endpoints: latency-svc-s46m9 [2.642733946s]
Feb 14 12:29:31.809: INFO: Created: latency-svc-m9sd4
Feb 14 12:29:31.833: INFO: Got endpoints: latency-svc-m9sd4 [2.508294829s]
Feb 14 12:29:31.937: INFO: Created: latency-svc-lndr9
Feb 14 12:29:31.969: INFO: Got endpoints: latency-svc-lndr9 [2.574417552s]
Feb 14 12:29:32.026: INFO: Created: latency-svc-7csch
Feb 14 12:29:32.106: INFO: Got endpoints: latency-svc-7csch [2.593088413s]
Feb 14 12:29:32.124: INFO: Created: latency-svc-bbt6w
Feb 14 12:29:32.156: INFO: Got endpoints: latency-svc-bbt6w [2.357428058s]
Feb 14 12:29:32.205: INFO: Created: latency-svc-9wrff
Feb 14 12:29:32.301: INFO: Got endpoints: latency-svc-9wrff [2.368591639s]
Feb 14 12:29:32.348: INFO: Created: latency-svc-pmhrp
Feb 14 12:29:32.351: INFO: Got endpoints: latency-svc-pmhrp [2.382385684s]
Feb 14 12:29:32.385: INFO: Created: latency-svc-pgqjm
Feb 14 12:29:32.482: INFO: Got endpoints: latency-svc-pgqjm [2.366512007s]
Feb 14 12:29:32.511: INFO: Created: latency-svc-wmfff
Feb 14 12:29:32.551: INFO: Got endpoints: latency-svc-wmfff [2.391983594s]
Feb 14 12:29:32.551: INFO: Latencies: [224.143103ms 719.299235ms 954.783537ms 1.051872572s 1.266383626s 1.482397801s 1.634360505s 1.704337727s 1.751414516s 1.773096633s 1.778035382s 1.788044117s 1.797523538s 1.83966475s 1.881293985s 1.894418257s 1.896798765s 1.921192191s 1.937346654s 1.942171187s 1.963531707s 1.980044842s 1.988097162s 1.993435032s 1.995590376s 2.000760596s 2.003345074s 2.016111996s 2.020132585s 2.023795686s 2.024433997s 2.026809718s 2.027659257s 2.027836298s 2.029478657s 2.03225067s 2.03280966s 2.032970932s 2.034165756s 2.040089801s 2.042561412s 2.044497215s 2.046766748s 2.051331388s 2.05205632s 2.06853414s 2.069834916s 2.070597303s 2.07421214s 2.085548893s 2.098250162s 2.109686098s 2.110391732s 2.128237981s 2.136565922s 2.159558097s 2.182007038s 2.192523573s 2.219723474s 2.221044254s 2.225907836s 2.264316459s 2.275818971s 2.284266283s 2.284385123s 2.297876735s 2.320947292s 2.325044682s 2.328399971s 2.351837564s 2.357428058s 2.366512007s 2.368591639s 2.380480569s 2.381384779s 2.382385684s 2.391983594s 2.392978036s 2.400714898s 2.414485309s 2.420668448s 2.437732612s 2.439945277s 2.469645713s 2.471351303s 2.471729474s 2.475074104s 2.477668608s 2.482151429s 2.487317664s 2.498645273s 2.503906229s 2.505697168s 2.508294829s 2.509029457s 2.514251717s 2.526841999s 2.533126232s 2.549795278s 2.550030009s 2.551570433s 2.55259868s 2.560308679s 2.561933408s 2.572998655s 2.574417552s 2.58658928s 2.591320181s 2.593088413s 2.599769759s 2.600947516s 2.609107169s 2.624655707s 2.633399517s 2.635891063s 2.642733946s 2.644941969s 2.646003114s 2.656375482s 2.658158758s 2.661899169s 2.665724604s 2.669490929s 2.680163797s 2.684286947s 2.687697898s 2.714557756s 2.720031722s 2.759859686s 2.768509016s 2.783137014s 2.794557584s 2.806836249s 2.812401102s 2.827183718s 2.844931167s 2.851262875s 2.859780386s 2.877002095s 2.912385678s 2.927912991s 2.933440553s 2.941340575s 2.960131067s 2.984226417s 3.002787524s 3.012822921s 3.038663521s 3.043602348s 3.064483244s 3.075505568s 3.085133494s 3.109993326s 3.136577306s 3.305099462s 3.806193731s 4.40542549s 4.429188948s 4.482174552s 4.485950578s 4.486019699s 4.501205746s 4.511881825s 4.523925065s 4.584717616s 4.595778392s 4.622551261s 4.630878848s 4.724242363s 4.74268279s 4.972211538s 5.094557397s 5.762085705s 5.864925756s 5.865427093s 5.866072767s 5.881113424s 5.883858117s 5.992698696s 6.04329198s 6.057651643s 6.100421547s 6.136857703s 6.196777514s 6.247839216s 6.732022233s 6.811015819s 6.823181915s 6.823225767s 6.825002604s 6.827224044s 6.84000492s 6.861330305s 6.898650766s 6.928325348s 7.003629462s 7.016058367s 7.028096862s 7.134360187s 7.21347832s]
Feb 14 12:29:32.551: INFO: 50 %ile: 2.551570433s
Feb 14 12:29:32.551: INFO: 90 %ile: 6.057651643s
Feb 14 12:29:32.551: INFO: 99 %ile: 7.134360187s
Feb 14 12:29:32.551: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:29:32.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-dl9xd" for this suite.
Feb 14 12:30:26.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:30:26.819: INFO: namespace: e2e-tests-svc-latency-dl9xd, resource: bindings, ignored listing per whitelist
Feb 14 12:30:26.870: INFO: namespace e2e-tests-svc-latency-dl9xd deletion completed in 54.176266227s

• [SLOW TEST:110.491 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
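The percentile lines are computed from the 200 sorted samples listed just above. Assuming the common nearest-rank definition (the e2e framework's exact index arithmetic may differ by one position), the p-th percentile of n sorted samples is the k-th smallest value, with

  k = \lceil n \cdot p / 100 \rceil, \qquad n = 200 \;\Rightarrow\; k_{50} = 100,\; k_{90} = 180,\; k_{99} = 198

so the reported 50/90/99 %ile values correspond to entries near those ranks in the sorted list.
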
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:30:26.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Feb 14 12:30:27.020: INFO: PodSpec: initContainers in spec.initContainers
Feb 14 12:31:47.839: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c7a6c521-4f25-11ea-af88-0242ac110007", GenerateName:"", Namespace:"e2e-tests-init-container-lhmzg", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-lhmzg/pods/pod-init-c7a6c521-4f25-11ea-af88-0242ac110007", UID:"c7a7c4a0-4f25-11ea-a994-fa163e34d433", ResourceVersion:"21646194", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717280227, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"20121484"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-cxbqk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000f7e000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cxbqk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cxbqk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-cxbqk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001d9c358), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b0bda0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001d9c4e0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001d9c550)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001d9c558), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001d9c55c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717280227, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717280227, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717280227, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717280227, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc001398120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000241570)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0002415e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://dd86f4a581f6bb4d2003839bc99c850b36dc50fa6a7b1b43d07a13517661f72e"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0013982e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0013981c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:31:47.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-lhmzg" for this suite.
Feb 14 12:32:09.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:32:10.049: INFO: namespace: e2e-tests-init-container-lhmzg, resource: bindings, ignored listing per whitelist
Feb 14 12:32:10.075: INFO: namespace e2e-tests-init-container-lhmzg deletion completed in 22.203600297s

• [SLOW TEST:103.205 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
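Note: the PodSpec dumped above can be reproduced by hand to watch the same behaviour — with restartPolicy: Always a failing init container is retried with back-off and the app container never starts. A minimal sketch (images and commands mirror the logged spec; the pod name is illustrative):

  $ kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-init-demo
  spec:
    restartPolicy: Always
    initContainers:
    - name: init1
      image: docker.io/library/busybox:1.29
      command: ["/bin/false"]   # always fails, so initialization never completes
    - name: init2
      image: docker.io/library/busybox:1.29
      command: ["/bin/true"]    # never reached while init1 keeps failing
    containers:
    - name: run1
      image: k8s.gcr.io/pause:3.1
  EOF
  $ kubectl get pod pod-init-demo --watch   # stays in Init:0/2 / Init:CrashLoopBackOff; run1 never runs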
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:32:10.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Feb 14 12:32:10.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-r776l'
Feb 14 12:32:10.836: INFO: stderr: ""
Feb 14 12:32:10.836: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Feb 14 12:32:11.861: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:32:11.861: INFO: Found 0 / 1
Feb 14 12:32:12.862: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:32:12.862: INFO: Found 0 / 1
Feb 14 12:32:13.851: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:32:13.851: INFO: Found 0 / 1
Feb 14 12:32:14.850: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:32:14.850: INFO: Found 0 / 1
Feb 14 12:32:15.865: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:32:15.865: INFO: Found 0 / 1
Feb 14 12:32:16.849: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:32:16.850: INFO: Found 0 / 1
Feb 14 12:32:17.861: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:32:17.861: INFO: Found 0 / 1
Feb 14 12:32:18.847: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:32:18.848: INFO: Found 0 / 1
Feb 14 12:32:19.877: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:32:19.878: INFO: Found 0 / 1
Feb 14 12:32:20.931: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:32:20.931: INFO: Found 1 / 1
Feb 14 12:32:20.931: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 14 12:32:20.958: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 12:32:20.958: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb 14 12:32:20.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-q24fv redis-master --namespace=e2e-tests-kubectl-r776l'
Feb 14 12:32:21.281: INFO: stderr: ""
Feb 14 12:32:21.281: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 14 Feb 12:32:19.583 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 Feb 12:32:19.583 # Server started, Redis version 3.2.12\n1:M 14 Feb 12:32:19.584 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 Feb 12:32:19.584 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 14 12:32:21.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q24fv redis-master --namespace=e2e-tests-kubectl-r776l --tail=1'
Feb 14 12:32:21.476: INFO: stderr: ""
Feb 14 12:32:21.476: INFO: stdout: "1:M 14 Feb 12:32:19.584 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 14 12:32:21.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q24fv redis-master --namespace=e2e-tests-kubectl-r776l --limit-bytes=1'
Feb 14 12:32:21.659: INFO: stderr: ""
Feb 14 12:32:21.659: INFO: stdout: " "
STEP: exposing timestamps
Feb 14 12:32:21.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q24fv redis-master --namespace=e2e-tests-kubectl-r776l --tail=1 --timestamps'
Feb 14 12:32:21.860: INFO: stderr: ""
Feb 14 12:32:21.860: INFO: stdout: "2020-02-14T12:32:19.586130035Z 1:M 14 Feb 12:32:19.584 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 14 12:32:24.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q24fv redis-master --namespace=e2e-tests-kubectl-r776l --since=1s'
Feb 14 12:32:24.673: INFO: stderr: ""
Feb 14 12:32:24.674: INFO: stdout: ""
Feb 14 12:32:24.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-q24fv redis-master --namespace=e2e-tests-kubectl-r776l --since=24h'
Feb 14 12:32:24.834: INFO: stderr: ""
Feb 14 12:32:24.835: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 14 Feb 12:32:19.583 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 14 Feb 12:32:19.583 # Server started, Redis version 3.2.12\n1:M 14 Feb 12:32:19.584 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 14 Feb 12:32:19.584 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Feb 14 12:32:24.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-r776l'
Feb 14 12:32:24.987: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 12:32:24.988: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 14 12:32:24.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-r776l'
Feb 14 12:32:25.220: INFO: stderr: "No resources found.\n"
Feb 14 12:32:25.220: INFO: stdout: ""
Feb 14 12:32:25.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-r776l -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 12:32:25.402: INFO: stderr: ""
Feb 14 12:32:25.402: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:32:25.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-r776l" for this suite.
Feb 14 12:32:47.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:32:47.683: INFO: namespace: e2e-tests-kubectl-r776l, resource: bindings, ignored listing per whitelist
Feb 14 12:32:47.708: INFO: namespace e2e-tests-kubectl-r776l deletion completed in 22.295267351s

• [SLOW TEST:37.632 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
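Note: the filtering flags exercised above compose freely with plain `kubectl logs`; a short recap against the pod and namespace from this run (any pod/container pair works the same way — the run above uses both the `logs` and the older `log` spelling, and both accept the same flags):

  $ POD=redis-master-q24fv; NS=e2e-tests-kubectl-r776l
  $ kubectl logs $POD redis-master -n $NS                        # full container log
  $ kubectl logs $POD redis-master -n $NS --tail=1               # only the last line
  $ kubectl logs $POD redis-master -n $NS --limit-bytes=1        # truncate the stream after N bytes
  $ kubectl logs $POD redis-master -n $NS --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
  $ kubectl logs $POD redis-master -n $NS --since=1s             # empty if the pod has been quiet
  $ kubectl logs $POD redis-master -n $NS --since=24h            # everything from the last day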
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:32:47.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Feb 14 12:32:47.941: INFO: Waiting up to 5m0s for pod "client-containers-1ba3de9b-4f26-11ea-af88-0242ac110007" in namespace "e2e-tests-containers-9tkhm" to be "success or failure"
Feb 14 12:32:47.957: INFO: Pod "client-containers-1ba3de9b-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.314213ms
Feb 14 12:32:50.059: INFO: Pod "client-containers-1ba3de9b-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117038265s
Feb 14 12:32:52.081: INFO: Pod "client-containers-1ba3de9b-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139195045s
Feb 14 12:32:54.098: INFO: Pod "client-containers-1ba3de9b-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156395112s
Feb 14 12:32:56.115: INFO: Pod "client-containers-1ba3de9b-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.173326217s
Feb 14 12:32:58.136: INFO: Pod "client-containers-1ba3de9b-4f26-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.194201432s
STEP: Saw pod success
Feb 14 12:32:58.136: INFO: Pod "client-containers-1ba3de9b-4f26-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:32:58.145: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-1ba3de9b-4f26-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 12:32:58.308: INFO: Waiting for pod client-containers-1ba3de9b-4f26-11ea-af88-0242ac110007 to disappear
Feb 14 12:32:58.379: INFO: Pod client-containers-1ba3de9b-4f26-11ea-af88-0242ac110007 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:32:58.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-9tkhm" for this suite.
Feb 14 12:33:06.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:33:06.683: INFO: namespace: e2e-tests-containers-9tkhm, resource: bindings, ignored listing per whitelist
Feb 14 12:33:06.747: INFO: namespace e2e-tests-containers-9tkhm deletion completed in 8.248092366s

• [SLOW TEST:19.038 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
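Note: when neither command nor args is set, the container runs the image's own ENTRYPOINT/CMD. A minimal sketch using the pause image that also appears earlier in this run (pod name illustrative):

  $ kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: image-defaults-demo
  spec:
    containers:
    - name: test-container
      image: k8s.gcr.io/pause:3.1   # no command/args: the image defaults are used
  EOF
  $ kubectl get pod image-defaults-demo -o jsonpath='{.spec.containers[0].command}'
  # empty output: nothing in the pod spec overrides the image entrypoint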
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:33:06.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Feb 14 12:33:17.080: INFO: Pod pod-hostip-26f98779-4f26-11ea-af88-0242ac110007 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:33:17.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-pjfcw" for this suite.
Feb 14 12:33:41.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:33:41.444: INFO: namespace: e2e-tests-pods-pjfcw, resource: bindings, ignored listing per whitelist
Feb 14 12:33:41.481: INFO: namespace e2e-tests-pods-pjfcw deletion completed in 24.386341056s

• [SLOW TEST:34.734 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
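Note: once the pod is scheduled, status.hostIP carries the IP of the node it landed on; it can be read back directly, or exposed to the container via the downward API. For example (pod name illustrative):

  $ kubectl get pod my-pod -o jsonpath='{.status.hostIP}{"\n"}'
  # same field the test asserts on; as an env var it would be:
  #   env:
  #   - name: HOST_IP
  #     valueFrom:
  #       fieldRef:
  #         fieldPath: status.hostIP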
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:33:41.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Feb 14 12:33:41.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-7c8jh run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 14 12:33:52.678: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0214 12:33:51.437012    3314 log.go:172] (0xc000138790) (0xc0002ca3c0) Create stream\nI0214 12:33:51.437286    3314 log.go:172] (0xc000138790) (0xc0002ca3c0) Stream added, broadcasting: 1\nI0214 12:33:51.444634    3314 log.go:172] (0xc000138790) Reply frame received for 1\nI0214 12:33:51.444774    3314 log.go:172] (0xc000138790) (0xc0004985a0) Create stream\nI0214 12:33:51.444784    3314 log.go:172] (0xc000138790) (0xc0004985a0) Stream added, broadcasting: 3\nI0214 12:33:51.446401    3314 log.go:172] (0xc000138790) Reply frame received for 3\nI0214 12:33:51.446434    3314 log.go:172] (0xc000138790) (0xc0007a8500) Create stream\nI0214 12:33:51.446442    3314 log.go:172] (0xc000138790) (0xc0007a8500) Stream added, broadcasting: 5\nI0214 12:33:51.447689    3314 log.go:172] (0xc000138790) Reply frame received for 5\nI0214 12:33:51.447732    3314 log.go:172] (0xc000138790) (0xc0002ca460) Create stream\nI0214 12:33:51.447748    3314 log.go:172] (0xc000138790) (0xc0002ca460) Stream added, broadcasting: 7\nI0214 12:33:51.449011    3314 log.go:172] (0xc000138790) Reply frame received for 7\nI0214 12:33:51.449649    3314 log.go:172] (0xc0004985a0) (3) Writing data frame\nI0214 12:33:51.449986    3314 log.go:172] (0xc0004985a0) (3) Writing data frame\nI0214 12:33:51.457007    3314 log.go:172] (0xc000138790) Data frame received for 5\nI0214 12:33:51.457033    3314 log.go:172] (0xc0007a8500) (5) Data frame handling\nI0214 12:33:51.457058    3314 log.go:172] (0xc0007a8500) (5) Data frame sent\nI0214 12:33:51.461263    3314 log.go:172] (0xc000138790) Data frame received for 5\nI0214 12:33:51.461282    3314 log.go:172] (0xc0007a8500) (5) Data frame handling\nI0214 12:33:51.461301    3314 log.go:172] (0xc0007a8500) (5) Data frame sent\nI0214 12:33:52.602849    3314 log.go:172] (0xc000138790) Data frame received for 1\nI0214 12:33:52.602960    3314 log.go:172] (0xc000138790) (0xc0004985a0) Stream removed, broadcasting: 3\nI0214 12:33:52.603039    3314 log.go:172] (0xc0002ca3c0) (1) Data frame handling\nI0214 12:33:52.603089    3314 log.go:172] (0xc0002ca3c0) (1) Data frame sent\nI0214 12:33:52.603135    3314 log.go:172] (0xc000138790) (0xc0002ca3c0) Stream removed, broadcasting: 1\nI0214 12:33:52.604018    3314 log.go:172] (0xc000138790) (0xc0007a8500) Stream removed, broadcasting: 5\nI0214 12:33:52.605581    3314 log.go:172] (0xc000138790) (0xc0002ca460) Stream removed, broadcasting: 7\nI0214 12:33:52.605631    3314 log.go:172] (0xc000138790) (0xc0002ca3c0) Stream removed, broadcasting: 1\nI0214 12:33:52.605651    3314 log.go:172] (0xc000138790) (0xc0004985a0) Stream removed, broadcasting: 3\nI0214 12:33:52.605669    3314 log.go:172] (0xc000138790) (0xc0007a8500) Stream removed, broadcasting: 5\nI0214 12:33:52.605691    3314 log.go:172] (0xc000138790) (0xc0002ca460) Stream removed, broadcasting: 7\nI0214 12:33:52.606850    3314 log.go:172] (0xc000138790) Go away received\n"
Feb 14 12:33:52.679: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:33:55.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7c8jh" for this suite.
Feb 14 12:34:02.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:34:02.182: INFO: namespace: e2e-tests-kubectl-7c8jh, resource: bindings, ignored listing per whitelist
Feb 14 12:34:02.277: INFO: namespace e2e-tests-kubectl-7c8jh deletion completed in 6.298340277s

• [SLOW TEST:20.796 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
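Note: the command above can be replayed by piping data into an attached, auto-deleting job; everything after `--` runs inside the container. A sketch with an illustrative name (--generator=job/v1 is the v1.13-era spelling and, as the stderr above notes, is deprecated on newer clients):

  $ echo -n abcd1234 | kubectl run rm-job-demo \
      --image=docker.io/library/busybox:1.29 \
      --rm --stdin --attach --restart=OnFailure --generator=job/v1 \
      -- sh -c 'cat && echo stdin closed'
  # prints "abcd1234stdin closed", then deletes the job because of --rm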
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:34:02.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0214 12:34:33.179151       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 12:34:33.179: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:34:33.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-vbq5m" for this suite.
Feb 14 12:34:41.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:34:41.467: INFO: namespace: e2e-tests-gc-vbq5m, resource: bindings, ignored listing per whitelist
Feb 14 12:34:41.564: INFO: namespace e2e-tests-gc-vbq5m deletion completed in 8.378825899s

• [SLOW TEST:39.286 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
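Note: the same orphaning behaviour is reachable from the CLI — deleting a Deployment without cascading leaves its ReplicaSet (and pods) behind once the garbage collector strips the owner reference. A sketch with illustrative names (--cascade=false is the spelling on a v1.13-era client like the one in this run; newer clients spell it --cascade=orphan):

  $ kubectl create deployment gc-demo --image=k8s.gcr.io/pause:3.1
  $ kubectl get rs -l app=gc-demo                      # the ReplicaSet owned by the deployment
  $ kubectl delete deployment gc-demo --cascade=false  # deleteOptions.propagationPolicy=Orphan
  $ kubectl get rs -l app=gc-demo                      # still present, now ownerless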
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:34:41.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-5f928a24-4f26-11ea-af88-0242ac110007
STEP: Creating configMap with name cm-test-opt-upd-5f928de8-4f26-11ea-af88-0242ac110007
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5f928a24-4f26-11ea-af88-0242ac110007
STEP: Updating configmap cm-test-opt-upd-5f928de8-4f26-11ea-af88-0242ac110007
STEP: Creating configMap with name cm-test-opt-create-5f928e5c-4f26-11ea-af88-0242ac110007
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:36:18.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-ghrrs" for this suite.
Feb 14 12:37:00.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:37:01.078: INFO: namespace: e2e-tests-projected-ghrrs, resource: bindings, ignored listing per whitelist
Feb 14 12:37:01.178: INFO: namespace e2e-tests-projected-ghrrs deletion completed in 42.439291517s

• [SLOW TEST:139.614 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
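Note: the behaviour above can be reproduced with a projected volume whose ConfigMap sources are marked optional, so missing maps don't block pod start and late-created ones show up once the kubelet resyncs. A hand-written sketch (all names illustrative; keys chosen to avoid path collisions between sources):

  $ kubectl create configmap cm-opt-upd --from-literal=opt-upd=value-1
  $ kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-demo
  spec:
    containers:
    - name: c
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "while true; do cat /etc/projected/* 2>/dev/null; echo; sleep 5; done"]
      volumeMounts:
      - name: cm-vol
        mountPath: /etc/projected
    volumes:
    - name: cm-vol
      projected:
        sources:
        - configMap:
            name: cm-opt-upd
            optional: true        # updates to this map appear in the volume
        - configMap:
            name: cm-opt-create   # does not exist yet; optional, so the pod still starts
            optional: true
  EOF
  $ kubectl create configmap cm-opt-create --from-literal=opt-create=created
  $ kubectl patch configmap cm-opt-upd -p '{"data":{"opt-upd":"value-2"}}'
  # both changes become visible under /etc/projected within the kubelet sync period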
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:37:01.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 12:37:01.454: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 19.239394ms)
Feb 14 12:37:01.460: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.45559ms)
Feb 14 12:37:01.465: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.924253ms)
Feb 14 12:37:01.470: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.297836ms)
Feb 14 12:37:01.477: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.715897ms)
Feb 14 12:37:01.546: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 69.682383ms)
Feb 14 12:37:01.559: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.684289ms)
Feb 14 12:37:01.568: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.412385ms)
Feb 14 12:37:01.572: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.439933ms)
Feb 14 12:37:01.577: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.267656ms)
Feb 14 12:37:01.581: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.104652ms)
Feb 14 12:37:01.586: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.981839ms)
Feb 14 12:37:01.590: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.696086ms)
Feb 14 12:37:01.595: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.776236ms)
Feb 14 12:37:01.601: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.055414ms)
Feb 14 12:37:01.606: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.798065ms)
Feb 14 12:37:01.612: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.003894ms)
Feb 14 12:37:01.632: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 19.933318ms)
Feb 14 12:37:01.657: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 24.905217ms)
Feb 14 12:37:01.667: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.428087ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:37:01.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-kxfjd" for this suite.
Feb 14 12:37:07.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:37:07.734: INFO: namespace: e2e-tests-proxy-kxfjd, resource: bindings, ignored listing per whitelist
Feb 14 12:37:07.897: INFO: namespace e2e-tests-proxy-kxfjd deletion completed in 6.224539035s

• [SLOW TEST:6.719 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
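Note: the URLs probed above go through the apiserver's node proxy subresource, with the kubelet port (10250) spelled out in the node-name segment; the same listing can be fetched directly:

  $ kubectl get --raw "/api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/"
  # returns the kubelet's /var/log directory listing (alternatives.log, ...), proxied by the apiserver;
  # substitute any node name from `kubectl get nodes` on another cluster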
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:37:07.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-b6c246d1-4f26-11ea-af88-0242ac110007
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-b6c246d1-4f26-11ea-af88-0242ac110007
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:37:20.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-hx5f5" for this suite.
Feb 14 12:37:44.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:37:45.085: INFO: namespace: e2e-tests-configmap-hx5f5, resource: bindings, ignored listing per whitelist
Feb 14 12:37:45.116: INFO: namespace e2e-tests-configmap-hx5f5 deletion completed in 24.308606328s

• [SLOW TEST:37.219 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
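Note: as with the projected example earlier, an in-place update of the ConfigMap is enough — the kubelet rewrites the mounted files on its next sync without restarting the pod. A sketch (names illustrative):

  $ kubectl create configmap cm-upd-demo --from-literal=data-1=value-1
  $ kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-update-demo
  spec:
    containers:
    - name: c
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: cm
        mountPath: /etc/config
    volumes:
    - name: cm
      configMap:
        name: cm-upd-demo
  EOF
  $ kubectl patch configmap cm-upd-demo -p '{"data":{"data-1":"value-2"}}'
  $ kubectl exec cm-update-demo -- cat /etc/config/data-1   # eventually prints value-2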
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:37:45.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 12:37:45.363: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cceaa448-4f26-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-p88s2" to be "success or failure"
Feb 14 12:37:45.382: INFO: Pod "downwardapi-volume-cceaa448-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 19.235365ms
Feb 14 12:37:47.425: INFO: Pod "downwardapi-volume-cceaa448-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061582435s
Feb 14 12:37:49.444: INFO: Pod "downwardapi-volume-cceaa448-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080800741s
Feb 14 12:37:51.461: INFO: Pod "downwardapi-volume-cceaa448-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098354779s
Feb 14 12:37:53.470: INFO: Pod "downwardapi-volume-cceaa448-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.106989676s
Feb 14 12:37:55.481: INFO: Pod "downwardapi-volume-cceaa448-4f26-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117595683s
STEP: Saw pod success
Feb 14 12:37:55.481: INFO: Pod "downwardapi-volume-cceaa448-4f26-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:37:55.486: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-cceaa448-4f26-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 12:37:56.225: INFO: Waiting for pod downwardapi-volume-cceaa448-4f26-11ea-af88-0242ac110007 to disappear
Feb 14 12:37:56.627: INFO: Pod downwardapi-volume-cceaa448-4f26-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:37:56.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-p88s2" for this suite.
Feb 14 12:38:04.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:38:04.795: INFO: namespace: e2e-tests-projected-p88s2, resource: bindings, ignored listing per whitelist
Feb 14 12:38:05.081: INFO: namespace e2e-tests-projected-p88s2 deletion completed in 8.39566437s

• [SLOW TEST:19.964 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
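Note: defaultMode on a projected volume sets the permission bits of every file the atomic writer lays down. A sketch of the shape being tested (names and the 0400 mode are illustrative):

  $ kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mode-demo
  spec:
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        defaultMode: 0400          # applied to every projected file
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
  EOF
  $ kubectl exec downward-mode-demo -- stat -Lc '%a' /etc/podinfo/podname   # prints 400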
SSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:38:05.081: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-d8c9fe45-4f26-11ea-af88-0242ac110007
STEP: Creating secret with name secret-projected-all-test-volume-d8c9fd95-4f26-11ea-af88-0242ac110007
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 14 12:38:05.352: INFO: Waiting up to 5m0s for pod "projected-volume-d8c9fc99-4f26-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-b7flg" to be "success or failure"
Feb 14 12:38:05.422: INFO: Pod "projected-volume-d8c9fc99-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 69.70354ms
Feb 14 12:38:07.441: INFO: Pod "projected-volume-d8c9fc99-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088523762s
Feb 14 12:38:09.486: INFO: Pod "projected-volume-d8c9fc99-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13434152s
Feb 14 12:38:11.897: INFO: Pod "projected-volume-d8c9fc99-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.544570462s
Feb 14 12:38:13.920: INFO: Pod "projected-volume-d8c9fc99-4f26-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.567826914s
Feb 14 12:38:16.094: INFO: Pod "projected-volume-d8c9fc99-4f26-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.741953923s
STEP: Saw pod success
Feb 14 12:38:16.094: INFO: Pod "projected-volume-d8c9fc99-4f26-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:38:16.110: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-d8c9fc99-4f26-11ea-af88-0242ac110007 container projected-all-volume-test: 
STEP: delete the pod
Feb 14 12:38:16.265: INFO: Waiting for pod projected-volume-d8c9fc99-4f26-11ea-af88-0242ac110007 to disappear
Feb 14 12:38:16.295: INFO: Pod projected-volume-d8c9fc99-4f26-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:38:16.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b7flg" for this suite.
Feb 14 12:38:22.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:38:22.599: INFO: namespace: e2e-tests-projected-b7flg, resource: bindings, ignored listing per whitelist
Feb 14 12:38:22.706: INFO: namespace e2e-tests-projected-b7flg deletion completed in 6.402920046s

• [SLOW TEST:17.625 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
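Note: a projected volume can combine ConfigMap, Secret, and downward-API sources under one mount point, which is what this test exercises. A sketch with illustrative names:

  $ kubectl create configmap proj-cm --from-literal=setting=enabled
  $ kubectl create secret generic proj-secret --from-literal=password=changeme
  $ kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-all-demo
    labels: {app: projected-all-demo}
  spec:
    containers:
    - name: projected-all-volume-test
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "ls -R /projected && sleep 3600"]
      volumeMounts:
      - name: all-in-one
        mountPath: /projected
    volumes:
    - name: all-in-one
      projected:
        sources:
        - configMap: {name: proj-cm}
        - secret: {name: proj-secret}
        - downwardAPI:
            items:
            - path: labels
              fieldRef: {fieldPath: metadata.labels}
  EOF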
SSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:38:22.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-jx6b
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 12:38:22.982: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jx6b" in namespace "e2e-tests-subpath-vbd7j" to be "success or failure"
Feb 14 12:38:22.989: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.466601ms
Feb 14 12:38:25.116: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134645204s
Feb 14 12:38:27.141: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159799198s
Feb 14 12:38:29.287: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.305483932s
Feb 14 12:38:31.310: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.32827305s
Feb 14 12:38:33.329: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.347612804s
Feb 14 12:38:35.806: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.824195444s
Feb 14 12:38:37.818: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.835878756s
Feb 14 12:38:39.845: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Running", Reason="", readiness=false. Elapsed: 16.862963114s
Feb 14 12:38:41.872: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Running", Reason="", readiness=false. Elapsed: 18.890713388s
Feb 14 12:38:43.904: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Running", Reason="", readiness=false. Elapsed: 20.922593045s
Feb 14 12:38:45.915: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Running", Reason="", readiness=false. Elapsed: 22.933435177s
Feb 14 12:38:47.955: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Running", Reason="", readiness=false. Elapsed: 24.973450474s
Feb 14 12:38:49.974: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Running", Reason="", readiness=false. Elapsed: 26.992732824s
Feb 14 12:38:51.996: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Running", Reason="", readiness=false. Elapsed: 29.014418386s
Feb 14 12:38:54.013: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Running", Reason="", readiness=false. Elapsed: 31.031538345s
Feb 14 12:38:56.618: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Running", Reason="", readiness=false. Elapsed: 33.636728252s
Feb 14 12:38:58.659: INFO: Pod "pod-subpath-test-downwardapi-jx6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.677241084s
STEP: Saw pod success
Feb 14 12:38:58.659: INFO: Pod "pod-subpath-test-downwardapi-jx6b" satisfied condition "success or failure"
Feb 14 12:38:58.677: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-jx6b container test-container-subpath-downwardapi-jx6b: 
STEP: delete the pod
Feb 14 12:38:59.116: INFO: Waiting for pod pod-subpath-test-downwardapi-jx6b to disappear
Feb 14 12:38:59.605: INFO: Pod pod-subpath-test-downwardapi-jx6b no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-jx6b
Feb 14 12:38:59.605: INFO: Deleting pod "pod-subpath-test-downwardapi-jx6b" in namespace "e2e-tests-subpath-vbd7j"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:38:59.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-vbd7j" for this suite.
Feb 14 12:39:07.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:39:07.962: INFO: namespace: e2e-tests-subpath-vbd7j, resource: bindings, ignored listing per whitelist
Feb 14 12:39:08.001: INFO: namespace e2e-tests-subpath-vbd7j deletion completed in 8.376511423s

• [SLOW TEST:45.295 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
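Note: subPath mounts a single entry out of a volume rather than the whole directory; here the volume happens to be a downward-API one. A sketch (names illustrative; subPath mounts do not pick up later updates to the underlying volume):

  $ kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-downward-demo
  spec:
    restartPolicy: Never
    containers:
    - name: c
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "cat /opt/podname && sleep 60"]
      volumeMounts:
      - name: podinfo
        mountPath: /opt/podname
        subPath: podname          # only this one file is mounted
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
  EOF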
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:39:08.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 14 12:39:18.905: INFO: Successfully updated pod "pod-update-fe57101d-4f26-11ea-af88-0242ac110007"
STEP: verifying the updated pod is in kubernetes
Feb 14 12:39:18.930: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:39:18.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-bg8dt" for this suite.
Feb 14 12:39:43.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:39:43.048: INFO: namespace: e2e-tests-pods-bg8dt, resource: bindings, ignored listing per whitelist
Feb 14 12:39:43.123: INFO: namespace e2e-tests-pods-bg8dt deletion completed in 24.182228964s

• [SLOW TEST:35.121 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
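Note: only a handful of fields on a live pod are mutable (metadata such as labels and annotations, plus a few spec fields like the container image), and the update this test performs falls in that set. For example (pod name illustrative):

  $ kubectl label pod pod-update-demo time="$(date +%s)" --overwrite
  $ kubectl patch pod pod-update-demo -p '{"metadata":{"annotations":{"updated":"true"}}}'
  # most other spec fields are immutable; such patches are rejected by the apiserver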
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:39:43.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 12:39:43.349: INFO: Waiting up to 5m0s for pod "downwardapi-volume-133e2610-4f27-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-rtsd6" to be "success or failure"
Feb 14 12:39:43.356: INFO: Pod "downwardapi-volume-133e2610-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.910879ms
Feb 14 12:39:45.802: INFO: Pod "downwardapi-volume-133e2610-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.452993618s
Feb 14 12:39:47.827: INFO: Pod "downwardapi-volume-133e2610-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.477318612s
Feb 14 12:39:50.121: INFO: Pod "downwardapi-volume-133e2610-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.77196892s
Feb 14 12:39:52.174: INFO: Pod "downwardapi-volume-133e2610-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.824294084s
Feb 14 12:39:54.196: INFO: Pod "downwardapi-volume-133e2610-4f27-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.846512426s
STEP: Saw pod success
Feb 14 12:39:54.196: INFO: Pod "downwardapi-volume-133e2610-4f27-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:39:54.207: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-133e2610-4f27-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 12:39:54.447: INFO: Waiting for pod downwardapi-volume-133e2610-4f27-11ea-af88-0242ac110007 to disappear
Feb 14 12:39:54.573: INFO: Pod downwardapi-volume-133e2610-4f27-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:39:54.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rtsd6" for this suite.
Feb 14 12:40:00.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:40:00.734: INFO: namespace: e2e-tests-projected-rtsd6, resource: bindings, ignored listing per whitelist
Feb 14 12:40:00.820: INFO: namespace e2e-tests-projected-rtsd6 deletion completed in 6.239213343s

• [SLOW TEST:17.698 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
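
The projected downward API case above mounts the container's own memory limit as a file and checks its contents. A minimal sketch of that kind of pod spec using the k8s.io/api/core/v1 Go types; the container name matches the log ("client-container"), while the image, pod name and file path are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox", // illustrative; the e2e test uses its own helper image
                    Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("64Mi"),
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "memory_limit",
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.memory",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }

The test then asserts that the mounted file reflects the declared limit, which is why the pod only needs to run to completion ("success or failure" above).
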
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:40:00.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-1dcb0114-4f27-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 12:40:01.142: INFO: Waiting up to 5m0s for pod "pod-configmaps-1dd7c7fd-4f27-11ea-af88-0242ac110007" in namespace "e2e-tests-configmap-2mlxg" to be "success or failure"
Feb 14 12:40:01.192: INFO: Pod "pod-configmaps-1dd7c7fd-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 49.595511ms
Feb 14 12:40:03.593: INFO: Pod "pod-configmaps-1dd7c7fd-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.450351473s
Feb 14 12:40:05.605: INFO: Pod "pod-configmaps-1dd7c7fd-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.462585974s
Feb 14 12:40:07.787: INFO: Pod "pod-configmaps-1dd7c7fd-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.644362151s
Feb 14 12:40:09.801: INFO: Pod "pod-configmaps-1dd7c7fd-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.6590876s
Feb 14 12:40:11.876: INFO: Pod "pod-configmaps-1dd7c7fd-4f27-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.733952772s
STEP: Saw pod success
Feb 14 12:40:11.877: INFO: Pod "pod-configmaps-1dd7c7fd-4f27-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:40:11.896: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-1dd7c7fd-4f27-11ea-af88-0242ac110007 container configmap-volume-test: 
STEP: delete the pod
Feb 14 12:40:12.103: INFO: Waiting for pod pod-configmaps-1dd7c7fd-4f27-11ea-af88-0242ac110007 to disappear
Feb 14 12:40:12.118: INFO: Pod pod-configmaps-1dd7c7fd-4f27-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:40:12.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2mlxg" for this suite.
Feb 14 12:40:18.233: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:40:18.345: INFO: namespace: e2e-tests-configmap-2mlxg, resource: bindings, ignored listing per whitelist
Feb 14 12:40:18.387: INFO: namespace e2e-tests-configmap-2mlxg deletion completed in 6.261255403s

• [SLOW TEST:17.565 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
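
The "as non-root" ConfigMap case runs the consuming container with a non-root user and reads the mounted keys. A sketch of such a pod, using the k8s.io/api/core/v1 Go types; the container name follows the log ("configmap-volume-test"), while the image, UID and key name are illustrative assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func boolPtr(b bool) *bool    { return &b }
    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-nonroot-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{
                    RunAsUser:    int64Ptr(1000), // illustrative non-root UID
                    RunAsNonRoot: boolPtr(true),
                },
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "busybox", // illustrative image
                    Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                        ReadOnly:  true,
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
                        },
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }
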
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:40:18.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-2848705c-4f27-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 12:40:18.661: INFO: Waiting up to 5m0s for pod "pod-configmaps-2849c7a6-4f27-11ea-af88-0242ac110007" in namespace "e2e-tests-configmap-njdh5" to be "success or failure"
Feb 14 12:40:18.682: INFO: Pod "pod-configmaps-2849c7a6-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 20.626014ms
Feb 14 12:40:20.696: INFO: Pod "pod-configmaps-2849c7a6-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035425874s
Feb 14 12:40:22.716: INFO: Pod "pod-configmaps-2849c7a6-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055203493s
Feb 14 12:40:24.738: INFO: Pod "pod-configmaps-2849c7a6-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076443018s
Feb 14 12:40:27.288: INFO: Pod "pod-configmaps-2849c7a6-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.626951859s
Feb 14 12:40:29.303: INFO: Pod "pod-configmaps-2849c7a6-4f27-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.641926553s
STEP: Saw pod success
Feb 14 12:40:29.303: INFO: Pod "pod-configmaps-2849c7a6-4f27-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:40:29.309: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-2849c7a6-4f27-11ea-af88-0242ac110007 container configmap-volume-test: 
STEP: delete the pod
Feb 14 12:40:29.376: INFO: Waiting for pod pod-configmaps-2849c7a6-4f27-11ea-af88-0242ac110007 to disappear
Feb 14 12:40:29.422: INFO: Pod pod-configmaps-2849c7a6-4f27-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:40:29.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-njdh5" for this suite.
Feb 14 12:40:35.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:40:35.653: INFO: namespace: e2e-tests-configmap-njdh5, resource: bindings, ignored listing per whitelist
Feb 14 12:40:35.660: INFO: namespace e2e-tests-configmap-njdh5 deletion completed in 6.230188347s

• [SLOW TEST:17.272 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
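
The "with mappings" variant differs from the previous case only in that individual ConfigMap keys are remapped to new paths (optionally with an explicit file mode). A small sketch of just that volume source, using the k8s.io/api/core/v1 Go types; the key, path and mode are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        // Remap the key "data-2" to "path/to/data-2" inside the mount point.
        vol := corev1.Volume{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
                    Items: []corev1.KeyToPath{{
                        Key:  "data-2",
                        Path: "path/to/data-2",
                        Mode: int32Ptr(0400),
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }

Keys not listed in Items are simply not projected into the volume, which is the behaviour the mapping test relies on.
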
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:40:35.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 12:40:36.264: INFO: Waiting up to 5m0s for pod "downwardapi-volume-32b32752-4f27-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-lnv79" to be "success or failure"
Feb 14 12:40:36.284: INFO: Pod "downwardapi-volume-32b32752-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 19.753476ms
Feb 14 12:40:38.293: INFO: Pod "downwardapi-volume-32b32752-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029020731s
Feb 14 12:40:40.310: INFO: Pod "downwardapi-volume-32b32752-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04565683s
Feb 14 12:40:42.624: INFO: Pod "downwardapi-volume-32b32752-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.359728028s
Feb 14 12:40:44.635: INFO: Pod "downwardapi-volume-32b32752-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.371352164s
Feb 14 12:40:46.866: INFO: Pod "downwardapi-volume-32b32752-4f27-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.602066883s
STEP: Saw pod success
Feb 14 12:40:46.866: INFO: Pod "downwardapi-volume-32b32752-4f27-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:40:46.887: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-32b32752-4f27-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 12:40:47.367: INFO: Waiting for pod downwardapi-volume-32b32752-4f27-11ea-af88-0242ac110007 to disappear
Feb 14 12:40:47.405: INFO: Pod downwardapi-volume-32b32752-4f27-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:40:47.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-lnv79" for this suite.
Feb 14 12:40:53.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:40:53.670: INFO: namespace: e2e-tests-downward-api-lnv79, resource: bindings, ignored listing per whitelist
Feb 14 12:40:53.705: INFO: namespace e2e-tests-downward-api-lnv79 deletion completed in 6.201078195s

• [SLOW TEST:18.045 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
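
The plain (non-projected) Downward API volume case above exposes the container's CPU limit as a file. A sketch of that volume source with an explicit divisor, using the k8s.io/api/core/v1 Go types; the divisor value and file path are illustrative, and "client-container" matches the container name in the log:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // With Divisor "1m" the projected file contains the CPU limit in millicores.
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path: "cpu_limit",
                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                            ContainerName: "client-container", // must match a container in the pod spec
                            Resource:      "limits.cpu",
                            Divisor:       resource.MustParse("1m"),
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }
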
SSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:40:53.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-nfs8z
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-nfs8z
STEP: Deleting pre-stop pod
Feb 14 12:41:19.302: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:41:19.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-nfs8z" for this suite.
Feb 14 12:42:05.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:42:05.534: INFO: namespace: e2e-tests-prestop-nfs8z, resource: bindings, ignored listing per whitelist
Feb 14 12:42:05.614: INFO: namespace e2e-tests-prestop-nfs8z deletion completed in 46.274046452s

• [SLOW TEST:71.909 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
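
The PreStop case above runs a "server" pod and a "tester" pod whose preStop hook reports back to the server before the tester is killed; the JSON dump with "prestop": 1 is the server confirming it was contacted. A sketch of a container with such a hook, using the k8s.io/api/core/v1 Go types (newer releases call the hook type corev1.LifecycleHandler; the 1.13-era API used in this run named it corev1.Handler); the image and endpoint are illustrative, not the test's actual ones:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        c := corev1.Container{
            Name:  "tester",
            Image: "busybox", // illustrative image
            Lifecycle: &corev1.Lifecycle{
                PreStop: &corev1.LifecycleHandler{
                    Exec: &corev1.ExecAction{
                        // Notify the server pod before this container is stopped.
                        Command: []string{"wget", "-qO-", "http://server:8080/prestop"}, // illustrative endpoint
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }

Deleting the tester pod (as the test does) gives the hook terminationGracePeriodSeconds to complete before the container is killed.
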
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:42:05.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 14 12:42:05.787: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-r4mbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4mbh/configmaps/e2e-watch-test-configmap-a,UID:681d40b7-4f27-11ea-a994-fa163e34d433,ResourceVersion:21647443,Generation:0,CreationTimestamp:2020-02-14 12:42:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 14 12:42:05.787: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-r4mbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4mbh/configmaps/e2e-watch-test-configmap-a,UID:681d40b7-4f27-11ea-a994-fa163e34d433,ResourceVersion:21647443,Generation:0,CreationTimestamp:2020-02-14 12:42:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 14 12:42:15.815: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-r4mbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4mbh/configmaps/e2e-watch-test-configmap-a,UID:681d40b7-4f27-11ea-a994-fa163e34d433,ResourceVersion:21647456,Generation:0,CreationTimestamp:2020-02-14 12:42:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 14 12:42:15.816: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-r4mbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4mbh/configmaps/e2e-watch-test-configmap-a,UID:681d40b7-4f27-11ea-a994-fa163e34d433,ResourceVersion:21647456,Generation:0,CreationTimestamp:2020-02-14 12:42:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 14 12:42:25.841: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-r4mbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4mbh/configmaps/e2e-watch-test-configmap-a,UID:681d40b7-4f27-11ea-a994-fa163e34d433,ResourceVersion:21647469,Generation:0,CreationTimestamp:2020-02-14 12:42:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 14 12:42:25.842: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-r4mbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4mbh/configmaps/e2e-watch-test-configmap-a,UID:681d40b7-4f27-11ea-a994-fa163e34d433,ResourceVersion:21647469,Generation:0,CreationTimestamp:2020-02-14 12:42:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 14 12:42:35.932: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-r4mbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4mbh/configmaps/e2e-watch-test-configmap-a,UID:681d40b7-4f27-11ea-a994-fa163e34d433,ResourceVersion:21647482,Generation:0,CreationTimestamp:2020-02-14 12:42:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 14 12:42:35.933: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-r4mbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4mbh/configmaps/e2e-watch-test-configmap-a,UID:681d40b7-4f27-11ea-a994-fa163e34d433,ResourceVersion:21647482,Generation:0,CreationTimestamp:2020-02-14 12:42:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 14 12:42:45.958: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-r4mbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4mbh/configmaps/e2e-watch-test-configmap-b,UID:80155166-4f27-11ea-a994-fa163e34d433,ResourceVersion:21647494,Generation:0,CreationTimestamp:2020-02-14 12:42:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 14 12:42:45.958: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-r4mbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4mbh/configmaps/e2e-watch-test-configmap-b,UID:80155166-4f27-11ea-a994-fa163e34d433,ResourceVersion:21647494,Generation:0,CreationTimestamp:2020-02-14 12:42:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 14 12:42:55.997: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-r4mbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4mbh/configmaps/e2e-watch-test-configmap-b,UID:80155166-4f27-11ea-a994-fa163e34d433,ResourceVersion:21647507,Generation:0,CreationTimestamp:2020-02-14 12:42:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 14 12:42:55.997: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-r4mbh,SelfLink:/api/v1/namespaces/e2e-tests-watch-r4mbh/configmaps/e2e-watch-test-configmap-b,UID:80155166-4f27-11ea-a994-fa163e34d433,ResourceVersion:21647507,Generation:0,CreationTimestamp:2020-02-14 12:42:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:43:05.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-r4mbh" for this suite.
Feb 14 12:43:12.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:43:12.295: INFO: namespace: e2e-tests-watch-r4mbh, resource: bindings, ignored listing per whitelist
Feb 14 12:43:12.314: INFO: namespace e2e-tests-watch-r4mbh deletion completed in 6.299573335s

• [SLOW TEST:66.699 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
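
The Watchers case opens label-filtered watches and expects the ADDED/MODIFIED/DELETED events shown in the log. A minimal client-go sketch of one of those watches, assuming a recent client-go (older clients omit the context argument); the namespace is illustrative, while the label selector matches the labels printed above:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Watch only configmaps carrying label A, mirroring the
        // "creating a watch on configmaps with label A" step.
        w, err := client.CoreV1().ConfigMaps("default").Watch(context.Background(), metav1.ListOptions{
            LabelSelector: "watch-this-configmap=multiple-watchers-A",
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()

        for event := range w.ResultChan() {
            // event.Type is ADDED, MODIFIED or DELETED, matching the "Got :" lines above.
            fmt.Printf("%s %T\n", event.Type, event.Object)
        }
    }

The "A or B" watch in the test is the same call with a selector that matches either label value.
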
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:43:12.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Feb 14 12:43:12.602: INFO: Waiting up to 5m0s for pod "client-containers-8ff14f95-4f27-11ea-af88-0242ac110007" in namespace "e2e-tests-containers-ck9vl" to be "success or failure"
Feb 14 12:43:12.650: INFO: Pod "client-containers-8ff14f95-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 48.076027ms
Feb 14 12:43:14.667: INFO: Pod "client-containers-8ff14f95-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06551366s
Feb 14 12:43:16.727: INFO: Pod "client-containers-8ff14f95-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125403448s
Feb 14 12:43:18.739: INFO: Pod "client-containers-8ff14f95-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137804583s
Feb 14 12:43:20.841: INFO: Pod "client-containers-8ff14f95-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.23903034s
Feb 14 12:43:22.893: INFO: Pod "client-containers-8ff14f95-4f27-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.291844968s
STEP: Saw pod success
Feb 14 12:43:22.894: INFO: Pod "client-containers-8ff14f95-4f27-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:43:22.906: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-8ff14f95-4f27-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 12:43:23.302: INFO: Waiting for pod client-containers-8ff14f95-4f27-11ea-af88-0242ac110007 to disappear
Feb 14 12:43:23.323: INFO: Pod client-containers-8ff14f95-4f27-11ea-af88-0242ac110007 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:43:23.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-ck9vl" for this suite.
Feb 14 12:43:33.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:43:33.501: INFO: namespace: e2e-tests-containers-ck9vl, resource: bindings, ignored listing per whitelist
Feb 14 12:43:33.606: INFO: namespace e2e-tests-containers-ck9vl deletion completed in 10.269882439s

• [SLOW TEST:21.292 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
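
The Docker Containers case verifies that setting a container Command replaces the image's default entrypoint. A minimal sketch using the k8s.io/api/core/v1 Go types; the image and command are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Command overrides the image ENTRYPOINT; Args would override CMD instead.
        c := corev1.Container{
            Name:    "test-container",
            Image:   "busybox",                                    // illustrative image
            Command: []string{"/bin/sh", "-c", "echo overridden"}, // replaces the entrypoint
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }
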
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:43:33.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-sz5nf
Feb 14 12:43:50.108: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-sz5nf
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 12:43:50.113: INFO: Initial restart count of pod liveness-exec is 0
Feb 14 12:44:39.931: INFO: Restart count of pod e2e-tests-container-probe-sz5nf/liveness-exec is now 1 (49.817669756s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:44:39.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-sz5nf" for this suite.
Feb 14 12:44:48.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:44:48.071: INFO: namespace: e2e-tests-container-probe-sz5nf, resource: bindings, ignored listing per whitelist
Feb 14 12:44:48.186: INFO: namespace e2e-tests-container-probe-sz5nf deletion completed in 8.191380567s

• [SLOW TEST:74.580 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
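
The Probing container case uses the classic liveness-exec pattern: the container creates /tmp/health, deletes it after a while, and the failing exec probe drives the restart counted in the log. A sketch of that container and probe using the k8s.io/api/core/v1 Go types; the image, timings and shell script are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        probe := &corev1.Probe{
            InitialDelaySeconds: 15,
            FailureThreshold:    1,
        }
        // Assigning through the embedded handler keeps this compiling against both
        // older (Handler) and newer (ProbeHandler) versions of the API types.
        probe.Exec = &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}

        c := corev1.Container{
            Name:          "liveness",
            Image:         "busybox", // illustrative image
            Command:       []string{"/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
            LivenessProbe: probe,
        }
        out, _ := json.MarshalIndent(c, "", "  ")
        fmt.Println(string(out))
    }

Once /tmp/health is removed, "cat /tmp/health" exits non-zero, the kubelet restarts the container, and restartCount goes from 0 to 1 as observed above.
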
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:44:48.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-c983d375-4f27-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 12:44:49.359: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c99e7eb9-4f27-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-bqnhm" to be "success or failure"
Feb 14 12:44:49.408: INFO: Pod "pod-projected-secrets-c99e7eb9-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 49.635062ms
Feb 14 12:44:51.634: INFO: Pod "pod-projected-secrets-c99e7eb9-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.275690355s
Feb 14 12:44:53.643: INFO: Pod "pod-projected-secrets-c99e7eb9-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.284726972s
Feb 14 12:44:56.031: INFO: Pod "pod-projected-secrets-c99e7eb9-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.672321415s
Feb 14 12:44:58.045: INFO: Pod "pod-projected-secrets-c99e7eb9-4f27-11ea-af88-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 8.686213434s
Feb 14 12:45:00.064: INFO: Pod "pod-projected-secrets-c99e7eb9-4f27-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.705352232s
STEP: Saw pod success
Feb 14 12:45:00.065: INFO: Pod "pod-projected-secrets-c99e7eb9-4f27-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:45:00.074: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-c99e7eb9-4f27-11ea-af88-0242ac110007 container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 12:45:00.193: INFO: Waiting for pod pod-projected-secrets-c99e7eb9-4f27-11ea-af88-0242ac110007 to disappear
Feb 14 12:45:00.210: INFO: Pod pod-projected-secrets-c99e7eb9-4f27-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:45:00.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bqnhm" for this suite.
Feb 14 12:45:06.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:45:06.716: INFO: namespace: e2e-tests-projected-bqnhm, resource: bindings, ignored listing per whitelist
Feb 14 12:45:06.734: INFO: namespace e2e-tests-projected-bqnhm deletion completed in 6.515883203s

• [SLOW TEST:18.547 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
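
The Projected secret case maps a secret key to a new path with an explicit item mode. A sketch of that volume source using the k8s.io/api/core/v1 Go types; the key, path and mode are illustrative assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        // Project one secret key to a renamed file with mode 0400.
        vol := corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",
                                Path: "new-path-data-1",
                                Mode: int32Ptr(0400),
                            }},
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }
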
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:45:06.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:45:17.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-kxjkh" for this suite.
Feb 14 12:45:23.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:45:23.345: INFO: namespace: e2e-tests-emptydir-wrapper-kxjkh, resource: bindings, ignored listing per whitelist
Feb 14 12:45:23.406: INFO: namespace e2e-tests-emptydir-wrapper-kxjkh deletion completed in 6.189075962s

• [SLOW TEST:16.671 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:45:23.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0214 12:45:39.936253       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 12:45:39.936: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:45:39.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-vts8n" for this suite.
Feb 14 12:46:01.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:46:01.267: INFO: namespace: e2e-tests-gc-vts8n, resource: bindings, ignored listing per whitelist
Feb 14 12:46:01.364: INFO: namespace e2e-tests-gc-vts8n deletion completed in 21.39087824s

• [SLOW TEST:37.958 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
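
The Garbage collector case gives half of the pods two owners and then deletes only one of them; pods that still have a valid owner must survive. A sketch of what such an ownerReferences list looks like, using the k8s.io/apimachinery Go types; the UIDs are placeholders, and the RC names follow the step names in the log:

    package main

    import (
        "encoding/json"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func boolPtr(b bool) *bool { return &b }

    func main() {
        owners := []metav1.OwnerReference{
            {
                APIVersion:         "v1",
                Kind:               "ReplicationController",
                Name:               "simpletest-rc-to-be-deleted",
                UID:                "00000000-0000-0000-0000-000000000001", // placeholder UID
                BlockOwnerDeletion: boolPtr(true),
            },
            {
                APIVersion: "v1",
                Kind:       "ReplicationController",
                Name:       "simpletest-rc-to-stay",
                UID:        "00000000-0000-0000-0000-000000000002", // placeholder UID
            },
        }
        // Deleting simpletest-rc-to-be-deleted (even with foreground propagation)
        // must not remove these pods, because simpletest-rc-to-stay remains a valid owner.
        out, _ := json.MarshalIndent(owners, "", "  ")
        fmt.Println(string(out))
    }
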
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:46:01.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 14 12:46:04.699: INFO: Waiting up to 5m0s for pod "pod-f6892dce-4f27-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-mlx8n" to be "success or failure"
Feb 14 12:46:04.732: INFO: Pod "pod-f6892dce-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 33.461503ms
Feb 14 12:46:07.326: INFO: Pod "pod-f6892dce-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.626648655s
Feb 14 12:46:09.337: INFO: Pod "pod-f6892dce-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.637978619s
Feb 14 12:46:11.344: INFO: Pod "pod-f6892dce-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.645238643s
Feb 14 12:46:13.358: INFO: Pod "pod-f6892dce-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.658613327s
Feb 14 12:46:15.369: INFO: Pod "pod-f6892dce-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.669857334s
Feb 14 12:46:18.167: INFO: Pod "pod-f6892dce-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 13.468586936s
Feb 14 12:46:20.182: INFO: Pod "pod-f6892dce-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.483025965s
Feb 14 12:46:22.196: INFO: Pod "pod-f6892dce-4f27-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 17.497318699s
Feb 14 12:46:24.210: INFO: Pod "pod-f6892dce-4f27-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.511104471s
STEP: Saw pod success
Feb 14 12:46:24.210: INFO: Pod "pod-f6892dce-4f27-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:46:24.214: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f6892dce-4f27-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 12:46:25.175: INFO: Waiting for pod pod-f6892dce-4f27-11ea-af88-0242ac110007 to disappear
Feb 14 12:46:25.209: INFO: Pod pod-f6892dce-4f27-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:46:25.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-mlx8n" for this suite.
Feb 14 12:46:33.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:46:33.552: INFO: namespace: e2e-tests-emptydir-mlx8n, resource: bindings, ignored listing per whitelist
Feb 14 12:46:33.657: INFO: namespace e2e-tests-emptydir-mlx8n deletion completed in 8.352189951s

• [SLOW TEST:32.293 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
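
The EmptyDir "(non-root,0777,tmpfs)" case mounts a memory-backed emptyDir; the 0777 permission and non-root checks are performed by the test container itself. A sketch of the volume source, using the k8s.io/api/core/v1 Go types; the volume name is illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
        vol := corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                EmptyDir: &corev1.EmptyDirVolumeSource{
                    Medium: corev1.StorageMediumMemory,
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }
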
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:46:33.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-08058ea3-4f28-11ea-af88-0242ac110007
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:46:46.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-sk2cn" for this suite.
Feb 14 12:47:10.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:47:10.292: INFO: namespace: e2e-tests-configmap-sk2cn, resource: bindings, ignored listing per whitelist
Feb 14 12:47:10.558: INFO: namespace e2e-tests-configmap-sk2cn deletion completed in 24.370392307s

• [SLOW TEST:36.900 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
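
The "binary data" ConfigMap case checks that BinaryData keys reach the pod intact alongside plain Data keys. A sketch of such a ConfigMap object using the k8s.io/api Go types; the name, keys and bytes are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        cm := corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"},
            // Data carries UTF-8 text; BinaryData carries arbitrary bytes
            // (base64-encoded on the wire).
            Data:       map[string]string{"data": "value-1"},
            BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe, 0x00, 0xba, 0xbe}},
        }
        out, _ := json.MarshalIndent(cm, "", "  ")
        fmt.Println(string(out))
    }
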
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:47:10.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7hl5h
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 14 12:47:10.763: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 14 12:47:51.181: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-7hl5h PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 12:47:51.181: INFO: >>> kubeConfig: /root/.kube/config
I0214 12:47:51.284967       8 log.go:172] (0xc001a94420) (0xc0023d8780) Create stream
I0214 12:47:51.285117       8 log.go:172] (0xc001a94420) (0xc0023d8780) Stream added, broadcasting: 1
I0214 12:47:51.296877       8 log.go:172] (0xc001a94420) Reply frame received for 1
I0214 12:47:51.297010       8 log.go:172] (0xc001a94420) (0xc0023d88c0) Create stream
I0214 12:47:51.297029       8 log.go:172] (0xc001a94420) (0xc0023d88c0) Stream added, broadcasting: 3
I0214 12:47:51.298237       8 log.go:172] (0xc001a94420) Reply frame received for 3
I0214 12:47:51.298273       8 log.go:172] (0xc001a94420) (0xc0025460a0) Create stream
I0214 12:47:51.298282       8 log.go:172] (0xc001a94420) (0xc0025460a0) Stream added, broadcasting: 5
I0214 12:47:51.299999       8 log.go:172] (0xc001a94420) Reply frame received for 5
I0214 12:47:51.547000       8 log.go:172] (0xc001a94420) Data frame received for 3
I0214 12:47:51.547114       8 log.go:172] (0xc0023d88c0) (3) Data frame handling
I0214 12:47:51.547136       8 log.go:172] (0xc0023d88c0) (3) Data frame sent
I0214 12:47:51.667703       8 log.go:172] (0xc001a94420) Data frame received for 1
I0214 12:47:51.667830       8 log.go:172] (0xc0023d8780) (1) Data frame handling
I0214 12:47:51.667878       8 log.go:172] (0xc0023d8780) (1) Data frame sent
I0214 12:47:51.668082       8 log.go:172] (0xc001a94420) (0xc0023d8780) Stream removed, broadcasting: 1
I0214 12:47:51.668716       8 log.go:172] (0xc001a94420) (0xc0023d88c0) Stream removed, broadcasting: 3
I0214 12:47:51.669875       8 log.go:172] (0xc001a94420) (0xc0025460a0) Stream removed, broadcasting: 5
I0214 12:47:51.670024       8 log.go:172] (0xc001a94420) (0xc0023d8780) Stream removed, broadcasting: 1
I0214 12:47:51.670092       8 log.go:172] (0xc001a94420) (0xc0023d88c0) Stream removed, broadcasting: 3
I0214 12:47:51.670117       8 log.go:172] (0xc001a94420) (0xc0025460a0) Stream removed, broadcasting: 5
Feb 14 12:47:51.670: INFO: Waiting for endpoints: map[]
I0214 12:47:51.671292       8 log.go:172] (0xc001a94420) Go away received
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:47:51.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-7hl5h" for this suite.
Feb 14 12:48:19.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:48:19.909: INFO: namespace: e2e-tests-pod-network-test-7hl5h, resource: bindings, ignored listing per whitelist
Feb 14 12:48:19.919: INFO: namespace e2e-tests-pod-network-test-7hl5h deletion completed in 28.20794182s

• [SLOW TEST:69.360 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
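
The networking check above works by exec'ing a curl in a host-test container against the test webserver's /dial endpoint, which in turn contacts the target pod over HTTP and reports what it reached. A small Go sketch of the same request; the URL is copied from this run's log, so the pod IPs would differ in any other cluster:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Same request the test issues via "curl -g -q -s" inside the hostexec container.
        url := "http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1"
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // Prints the JSON the test parses to confirm the target pod was reachable.
        fmt.Println(string(body))
    }
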
SSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:48:19.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Feb 14 12:48:20.169: INFO: Waiting up to 5m0s for pod "downward-api-4749dbc1-4f28-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-tlvvq" to be "success or failure"
Feb 14 12:48:20.449: INFO: Pod "downward-api-4749dbc1-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 280.209812ms
Feb 14 12:48:22.492: INFO: Pod "downward-api-4749dbc1-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.323001066s
Feb 14 12:48:24.523: INFO: Pod "downward-api-4749dbc1-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.354204165s
Feb 14 12:48:26.590: INFO: Pod "downward-api-4749dbc1-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420808102s
Feb 14 12:48:28.607: INFO: Pod "downward-api-4749dbc1-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.437379557s
Feb 14 12:48:30.647: INFO: Pod "downward-api-4749dbc1-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.477561724s
Feb 14 12:48:32.670: INFO: Pod "downward-api-4749dbc1-4f28-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.501042141s
STEP: Saw pod success
Feb 14 12:48:32.670: INFO: Pod "downward-api-4749dbc1-4f28-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:48:32.677: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-4749dbc1-4f28-11ea-af88-0242ac110007 container dapi-container: 
STEP: delete the pod
Feb 14 12:48:32.798: INFO: Waiting for pod downward-api-4749dbc1-4f28-11ea-af88-0242ac110007 to disappear
Feb 14 12:48:32.804: INFO: Pod downward-api-4749dbc1-4f28-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:48:32.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tlvvq" for this suite.
Feb 14 12:48:38.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:48:38.996: INFO: namespace: e2e-tests-downward-api-tlvvq, resource: bindings, ignored listing per whitelist
Feb 14 12:48:39.099: INFO: namespace e2e-tests-downward-api-tlvvq deletion completed in 6.287803272s

• [SLOW TEST:19.179 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
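
The Downward API case above injects the node's IP into the container environment via a fieldRef on status.hostIP. A minimal sketch of that env var using the k8s.io/api/core/v1 Go types; the variable name is illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        env := corev1.EnvVar{
            Name: "HOST_IP",
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{
                    APIVersion: "v1",
                    FieldPath:  "status.hostIP", // resolved by the kubelet when the container starts
                },
            },
        }
        out, _ := json.MarshalIndent(env, "", "  ")
        fmt.Println(string(out))
    }

The test's container simply echoes the variable and the framework greps the log for a plausible IP, which is why the pod only needs to reach "Succeeded".
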
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:48:39.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Feb 14 12:48:39.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 14 12:48:41.886: INFO: stderr: ""
Feb 14 12:48:41.887: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:48:41.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qphth" for this suite.
Feb 14 12:48:47.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:48:48.077: INFO: namespace: e2e-tests-kubectl-qphth, resource: bindings, ignored listing per whitelist
Feb 14 12:48:48.288: INFO: namespace e2e-tests-kubectl-qphth deletion completed in 6.391285687s

• [SLOW TEST:9.189 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:48:48.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 12:48:48.430: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:48:49.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-pkmpn" for this suite.
Feb 14 12:48:55.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:48:55.953: INFO: namespace: e2e-tests-custom-resource-definition-pkmpn, resource: bindings, ignored listing per whitelist
Feb 14 12:48:56.066: INFO: namespace e2e-tests-custom-resource-definition-pkmpn deletion completed in 6.342159715s

• [SLOW TEST:7.777 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
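The log does not echo the CustomResourceDefinition the test registers; below is a sketch of a simple CRD against the apiextensions.k8s.io/v1beta1 API served by this v1.13 cluster. The group and names are assumptions, not the ones the test uses.

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com            # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
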
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:48:56.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 12:48:56.375: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ccc518d-4f28-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-b24dz" to be "success or failure"
Feb 14 12:48:56.391: INFO: Pod "downwardapi-volume-5ccc518d-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.717216ms
Feb 14 12:49:00.915: INFO: Pod "downwardapi-volume-5ccc518d-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.539408263s
Feb 14 12:49:02.935: INFO: Pod "downwardapi-volume-5ccc518d-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.559853477s
Feb 14 12:49:04.968: INFO: Pod "downwardapi-volume-5ccc518d-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.593021904s
Feb 14 12:49:06.987: INFO: Pod "downwardapi-volume-5ccc518d-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.611619485s
Feb 14 12:49:09.007: INFO: Pod "downwardapi-volume-5ccc518d-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.632016297s
Feb 14 12:49:11.026: INFO: Pod "downwardapi-volume-5ccc518d-4f28-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.650927281s
STEP: Saw pod success
Feb 14 12:49:11.026: INFO: Pod "downwardapi-volume-5ccc518d-4f28-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:49:11.039: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5ccc518d-4f28-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 12:49:11.259: INFO: Waiting for pod downwardapi-volume-5ccc518d-4f28-11ea-af88-0242ac110007 to disappear
Feb 14 12:49:11.270: INFO: Pod downwardapi-volume-5ccc518d-4f28-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:49:11.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-b24dz" for this suite.
Feb 14 12:49:17.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:49:17.387: INFO: namespace: e2e-tests-downward-api-b24dz, resource: bindings, ignored listing per whitelist
Feb 14 12:49:17.501: INFO: namespace e2e-tests-downward-api-b24dz deletion completed in 6.213768076s

• [SLOW TEST:21.435 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
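A sketch of the feature the downward-API-volume test exercises: the container's memory request is projected into a file through a downwardAPI volume. The 32Mi request, file path, and pod name are illustrative; the container name mirrors the log.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-memory-request
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory  # projected as the file's contents
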
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:49:17.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6b194a9c-4f28-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 12:49:20.387: INFO: Waiting up to 5m0s for pod "pod-secrets-6b2f6e9f-4f28-11ea-af88-0242ac110007" in namespace "e2e-tests-secrets-gp7s6" to be "success or failure"
Feb 14 12:49:20.397: INFO: Pod "pod-secrets-6b2f6e9f-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.434754ms
Feb 14 12:49:22.758: INFO: Pod "pod-secrets-6b2f6e9f-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.371006349s
Feb 14 12:49:24.789: INFO: Pod "pod-secrets-6b2f6e9f-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.401928661s
Feb 14 12:49:26.861: INFO: Pod "pod-secrets-6b2f6e9f-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.473501875s
Feb 14 12:49:28.912: INFO: Pod "pod-secrets-6b2f6e9f-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.524567885s
Feb 14 12:49:30.925: INFO: Pod "pod-secrets-6b2f6e9f-4f28-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.537591982s
STEP: Saw pod success
Feb 14 12:49:30.925: INFO: Pod "pod-secrets-6b2f6e9f-4f28-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:49:30.929: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6b2f6e9f-4f28-11ea-af88-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Feb 14 12:49:31.755: INFO: Waiting for pod pod-secrets-6b2f6e9f-4f28-11ea-af88-0242ac110007 to disappear
Feb 14 12:49:32.083: INFO: Pod pod-secrets-6b2f6e9f-4f28-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:49:32.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-gp7s6" for this suite.
Feb 14 12:49:38.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:49:38.290: INFO: namespace: e2e-tests-secrets-gp7s6, resource: bindings, ignored listing per whitelist
Feb 14 12:49:38.369: INFO: namespace e2e-tests-secrets-gp7s6 deletion completed in 6.273703418s
STEP: Destroying namespace "e2e-tests-secret-namespace-5rzp4" for this suite.
Feb 14 12:49:44.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:49:44.501: INFO: namespace: e2e-tests-secret-namespace-5rzp4, resource: bindings, ignored listing per whitelist
Feb 14 12:49:44.582: INFO: namespace e2e-tests-secret-namespace-5rzp4 deletion completed in 6.213211225s

• [SLOW TEST:27.081 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
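A sketch of the scenario covered by the secrets test above: the pod mounts a secret by name, and an unrelated secret with the same name created in a second namespace (the e2e-tests-secret-namespace-* namespace destroyed at the end) must not affect the mount. Names and paths are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-same-name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test       # resolved only in the pod's own namespace
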
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:49:44.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 12:49:44.823: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 14 12:49:44.903: INFO: Number of nodes with available pods: 0
Feb 14 12:49:44.903: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 14 12:49:44.988: INFO: Number of nodes with available pods: 0
Feb 14 12:49:44.988: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:49:45.999: INFO: Number of nodes with available pods: 0
Feb 14 12:49:46.000: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:49:47.005: INFO: Number of nodes with available pods: 0
Feb 14 12:49:47.005: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:49:48.016: INFO: Number of nodes with available pods: 0
Feb 14 12:49:48.016: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:49:49.003: INFO: Number of nodes with available pods: 0
Feb 14 12:49:49.004: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:49:50.001: INFO: Number of nodes with available pods: 0
Feb 14 12:49:50.001: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:49:51.007: INFO: Number of nodes with available pods: 0
Feb 14 12:49:51.007: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:49:52.003: INFO: Number of nodes with available pods: 0
Feb 14 12:49:52.003: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:49:53.013: INFO: Number of nodes with available pods: 0
Feb 14 12:49:53.013: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:49:54.002: INFO: Number of nodes with available pods: 0
Feb 14 12:49:54.002: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:49:55.022: INFO: Number of nodes with available pods: 1
Feb 14 12:49:55.022: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 14 12:49:55.266: INFO: Number of nodes with available pods: 1
Feb 14 12:49:55.266: INFO: Number of running nodes: 0, number of available pods: 1
Feb 14 12:49:56.290: INFO: Number of nodes with available pods: 0
Feb 14 12:49:56.290: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 14 12:49:56.370: INFO: Number of nodes with available pods: 0
Feb 14 12:49:56.371: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:49:57.404: INFO: Number of nodes with available pods: 0
Feb 14 12:49:57.404: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:49:58.538: INFO: Number of nodes with available pods: 0
Feb 14 12:49:58.538: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:49:59.398: INFO: Number of nodes with available pods: 0
Feb 14 12:49:59.398: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:00.411: INFO: Number of nodes with available pods: 0
Feb 14 12:50:00.411: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:01.388: INFO: Number of nodes with available pods: 0
Feb 14 12:50:01.388: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:02.417: INFO: Number of nodes with available pods: 0
Feb 14 12:50:02.417: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:03.386: INFO: Number of nodes with available pods: 0
Feb 14 12:50:03.386: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:04.391: INFO: Number of nodes with available pods: 0
Feb 14 12:50:04.391: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:05.386: INFO: Number of nodes with available pods: 0
Feb 14 12:50:05.387: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:06.386: INFO: Number of nodes with available pods: 0
Feb 14 12:50:06.386: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:07.386: INFO: Number of nodes with available pods: 0
Feb 14 12:50:07.386: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:08.394: INFO: Number of nodes with available pods: 0
Feb 14 12:50:08.394: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:09.389: INFO: Number of nodes with available pods: 0
Feb 14 12:50:09.389: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:10.388: INFO: Number of nodes with available pods: 0
Feb 14 12:50:10.388: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:11.387: INFO: Number of nodes with available pods: 0
Feb 14 12:50:11.387: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:12.387: INFO: Number of nodes with available pods: 0
Feb 14 12:50:12.388: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:13.877: INFO: Number of nodes with available pods: 0
Feb 14 12:50:13.877: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:14.381: INFO: Number of nodes with available pods: 0
Feb 14 12:50:14.381: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:15.384: INFO: Number of nodes with available pods: 0
Feb 14 12:50:15.384: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:16.398: INFO: Number of nodes with available pods: 0
Feb 14 12:50:16.398: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:17.387: INFO: Number of nodes with available pods: 0
Feb 14 12:50:17.387: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:18.915: INFO: Number of nodes with available pods: 0
Feb 14 12:50:18.916: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:19.391: INFO: Number of nodes with available pods: 0
Feb 14 12:50:19.391: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:20.390: INFO: Number of nodes with available pods: 0
Feb 14 12:50:20.390: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:21.386: INFO: Number of nodes with available pods: 0
Feb 14 12:50:21.387: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:22.391: INFO: Number of nodes with available pods: 0
Feb 14 12:50:22.392: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:50:23.383: INFO: Number of nodes with available pods: 1
Feb 14 12:50:23.383: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-nq9b8, will wait for the garbage collector to delete the pods
Feb 14 12:50:23.464: INFO: Deleting DaemonSet.extensions daemon-set took: 12.768855ms
Feb 14 12:50:23.564: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.473623ms
Feb 14 12:50:32.677: INFO: Number of nodes with available pods: 0
Feb 14 12:50:32.677: INFO: Number of running nodes: 0, number of available pods: 0
Feb 14 12:50:32.681: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-nq9b8/daemonsets","resourceVersion":"21648526"},"items":null}

Feb 14 12:50:32.685: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-nq9b8/pods","resourceVersion":"21648526"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:50:33.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-nq9b8" for this suite.
Feb 14 12:50:39.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:50:39.436: INFO: namespace: e2e-tests-daemonsets-nq9b8, resource: bindings, ignored listing per whitelist
Feb 14 12:50:39.582: INFO: namespace e2e-tests-daemonsets-nq9b8 deletion completed in 6.480023663s

• [SLOW TEST:55.000 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
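A sketch of the kind of DaemonSet driving the "complex daemon" test above: a node selector keeps daemon pods off every node until the node is labelled to match, and relabelling the node unschedules them again. The label key/value and selector are assumptions; the image matches the one used elsewhere in this run.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue                 # relabelling the node to green removes the daemon pod
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
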
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:50:39.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-9a7cdb82-4f28-11ea-af88-0242ac110007
STEP: Creating a pod to test consume configMaps
Feb 14 12:50:39.883: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a8d5563-4f28-11ea-af88-0242ac110007" in namespace "e2e-tests-configmap-xtjhj" to be "success or failure"
Feb 14 12:50:39.910: INFO: Pod "pod-configmaps-9a8d5563-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 26.956735ms
Feb 14 12:50:42.179: INFO: Pod "pod-configmaps-9a8d5563-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296356741s
Feb 14 12:50:44.193: INFO: Pod "pod-configmaps-9a8d5563-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30986441s
Feb 14 12:50:46.227: INFO: Pod "pod-configmaps-9a8d5563-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.344264049s
Feb 14 12:50:48.311: INFO: Pod "pod-configmaps-9a8d5563-4f28-11ea-af88-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 8.427598757s
Feb 14 12:50:50.330: INFO: Pod "pod-configmaps-9a8d5563-4f28-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.446505082s
STEP: Saw pod success
Feb 14 12:50:50.330: INFO: Pod "pod-configmaps-9a8d5563-4f28-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:50:50.333: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-9a8d5563-4f28-11ea-af88-0242ac110007 container configmap-volume-test: 
STEP: delete the pod
Feb 14 12:50:50.638: INFO: Waiting for pod pod-configmaps-9a8d5563-4f28-11ea-af88-0242ac110007 to disappear
Feb 14 12:50:50.664: INFO: Pod pod-configmaps-9a8d5563-4f28-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:50:50.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xtjhj" for this suite.
Feb 14 12:50:57.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:50:57.487: INFO: namespace: e2e-tests-configmap-xtjhj, resource: bindings, ignored listing per whitelist
Feb 14 12:50:57.493: INFO: namespace e2e-tests-configmap-xtjhj deletion completed in 6.815109259s

• [SLOW TEST:17.910 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
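A sketch of the configMap volume feature exercised above: items remap a single key to a custom path and set a per-file mode. The key, path, mode, and names are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-mapped
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2
        path: path/to/data-2
        mode: 0400                  # per-item file mode
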
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:50:57.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Feb 14 12:50:57.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:50:58.287: INFO: stderr: ""
Feb 14 12:50:58.288: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 12:50:58.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:50:58.495: INFO: stderr: ""
Feb 14 12:50:58.495: INFO: stdout: "update-demo-nautilus-79b5s update-demo-nautilus-ftwjq "
Feb 14 12:50:58.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79b5s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:50:58.738: INFO: stderr: ""
Feb 14 12:50:58.739: INFO: stdout: ""
Feb 14 12:50:58.739: INFO: update-demo-nautilus-79b5s is created but not running
Feb 14 12:51:03.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:51:03.907: INFO: stderr: ""
Feb 14 12:51:03.907: INFO: stdout: "update-demo-nautilus-79b5s update-demo-nautilus-ftwjq "
Feb 14 12:51:03.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79b5s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:51:04.083: INFO: stderr: ""
Feb 14 12:51:04.083: INFO: stdout: ""
Feb 14 12:51:04.083: INFO: update-demo-nautilus-79b5s is created but not running
Feb 14 12:51:09.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:51:09.255: INFO: stderr: ""
Feb 14 12:51:09.255: INFO: stdout: "update-demo-nautilus-79b5s update-demo-nautilus-ftwjq "
Feb 14 12:51:09.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79b5s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:51:09.413: INFO: stderr: ""
Feb 14 12:51:09.414: INFO: stdout: ""
Feb 14 12:51:09.414: INFO: update-demo-nautilus-79b5s is created but not running
Feb 14 12:51:14.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:51:14.671: INFO: stderr: ""
Feb 14 12:51:14.672: INFO: stdout: "update-demo-nautilus-79b5s update-demo-nautilus-ftwjq "
Feb 14 12:51:14.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79b5s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:51:14.803: INFO: stderr: ""
Feb 14 12:51:14.804: INFO: stdout: "true"
Feb 14 12:51:14.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79b5s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:51:14.972: INFO: stderr: ""
Feb 14 12:51:14.972: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 12:51:14.972: INFO: validating pod update-demo-nautilus-79b5s
Feb 14 12:51:15.043: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 12:51:15.043: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 12:51:15.043: INFO: update-demo-nautilus-79b5s is verified up and running
Feb 14 12:51:15.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftwjq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:51:15.255: INFO: stderr: ""
Feb 14 12:51:15.255: INFO: stdout: "true"
Feb 14 12:51:15.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftwjq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:51:15.372: INFO: stderr: ""
Feb 14 12:51:15.372: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 12:51:15.372: INFO: validating pod update-demo-nautilus-ftwjq
Feb 14 12:51:15.381: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 12:51:15.381: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 12:51:15.381: INFO: update-demo-nautilus-ftwjq is verified up and running
STEP: using delete to clean up resources
Feb 14 12:51:15.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:51:15.582: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 12:51:15.583: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 14 12:51:15.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-ldgsb'
Feb 14 12:51:15.728: INFO: stderr: "No resources found.\n"
Feb 14 12:51:15.728: INFO: stdout: ""
Feb 14 12:51:15.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-ldgsb -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 12:51:15.925: INFO: stderr: ""
Feb 14 12:51:15.925: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:51:15.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ldgsb" for this suite.
Feb 14 12:51:40.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:51:40.218: INFO: namespace: e2e-tests-kubectl-ldgsb, resource: bindings, ignored listing per whitelist
Feb 14 12:51:40.269: INFO: namespace e2e-tests-kubectl-ldgsb deletion completed in 24.32056786s

• [SLOW TEST:42.776 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
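The manifest piped to 'kubectl create -f -' above is not echoed in the log; the sketch below is a replication controller consistent with what the run reports (two nautilus pods labelled name=update-demo). The selector and container port are assumptions.

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80
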
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:51:40.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 12:51:40.690: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 14 12:51:40.757: INFO: Number of nodes with available pods: 0
Feb 14 12:51:40.757: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:51:42.420: INFO: Number of nodes with available pods: 0
Feb 14 12:51:42.420: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:51:42.780: INFO: Number of nodes with available pods: 0
Feb 14 12:51:42.780: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:51:43.776: INFO: Number of nodes with available pods: 0
Feb 14 12:51:43.776: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:51:44.841: INFO: Number of nodes with available pods: 0
Feb 14 12:51:44.842: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:51:45.820: INFO: Number of nodes with available pods: 0
Feb 14 12:51:45.821: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:51:47.617: INFO: Number of nodes with available pods: 0
Feb 14 12:51:47.618: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:51:47.796: INFO: Number of nodes with available pods: 0
Feb 14 12:51:47.796: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:51:48.823: INFO: Number of nodes with available pods: 0
Feb 14 12:51:48.823: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:51:49.781: INFO: Number of nodes with available pods: 0
Feb 14 12:51:49.781: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:51:50.777: INFO: Number of nodes with available pods: 1
Feb 14 12:51:50.777: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 14 12:51:50.809: INFO: Wrong image for pod: daemon-set-bvgsw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 12:51:51.847: INFO: Wrong image for pod: daemon-set-bvgsw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 12:51:52.830: INFO: Wrong image for pod: daemon-set-bvgsw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 12:51:54.622: INFO: Wrong image for pod: daemon-set-bvgsw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 12:51:54.913: INFO: Wrong image for pod: daemon-set-bvgsw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 12:51:55.834: INFO: Wrong image for pod: daemon-set-bvgsw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 12:51:56.871: INFO: Wrong image for pod: daemon-set-bvgsw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 12:51:57.826: INFO: Wrong image for pod: daemon-set-bvgsw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 12:51:58.832: INFO: Wrong image for pod: daemon-set-bvgsw. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 14 12:51:58.832: INFO: Pod daemon-set-bvgsw is not available
Feb 14 12:51:59.830: INFO: Pod daemon-set-62kwx is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 14 12:51:59.862: INFO: Number of nodes with available pods: 0
Feb 14 12:51:59.862: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:52:01.296: INFO: Number of nodes with available pods: 0
Feb 14 12:52:01.296: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:52:01.917: INFO: Number of nodes with available pods: 0
Feb 14 12:52:01.917: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:52:02.960: INFO: Number of nodes with available pods: 0
Feb 14 12:52:02.960: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:52:03.928: INFO: Number of nodes with available pods: 0
Feb 14 12:52:03.928: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:52:07.150: INFO: Number of nodes with available pods: 0
Feb 14 12:52:07.150: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:52:08.018: INFO: Number of nodes with available pods: 0
Feb 14 12:52:08.019: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:52:08.897: INFO: Number of nodes with available pods: 0
Feb 14 12:52:08.897: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:52:09.893: INFO: Number of nodes with available pods: 0
Feb 14 12:52:09.893: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Feb 14 12:52:10.902: INFO: Number of nodes with available pods: 1
Feb 14 12:52:10.902: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-59x2v, will wait for the garbage collector to delete the pods
Feb 14 12:52:11.191: INFO: Deleting DaemonSet.extensions daemon-set took: 48.599949ms
Feb 14 12:52:11.291: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.471268ms
Feb 14 12:52:22.844: INFO: Number of nodes with available pods: 0
Feb 14 12:52:22.844: INFO: Number of running nodes: 0, number of available pods: 0
Feb 14 12:52:22.857: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-59x2v/daemonsets","resourceVersion":"21648798"},"items":null}

Feb 14 12:52:22.869: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-59x2v/pods","resourceVersion":"21648798"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:52:22.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-59x2v" for this suite.
Feb 14 12:52:28.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:52:29.065: INFO: namespace: e2e-tests-daemonsets-59x2v, resource: bindings, ignored listing per whitelist
Feb 14 12:52:29.120: INFO: namespace e2e-tests-daemonsets-59x2v deletion completed in 6.229866405s

• [SLOW TEST:48.850 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
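For the RollingUpdate test above, the relevant piece is the update strategy: once the pod template's image is changed (from nginx:1.14-alpine to the redis test image, per the log), the existing daemon pod is replaced in place. A sketch, with selector labels assumed:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # the updated image the test expects
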
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:52:29.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-dbddd91c-4f28-11ea-af88-0242ac110007
STEP: Creating a pod to test consume secrets
Feb 14 12:52:29.460: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dbe0593e-4f28-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-zlccl" to be "success or failure"
Feb 14 12:52:29.515: INFO: Pod "pod-projected-secrets-dbe0593e-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 54.792102ms
Feb 14 12:52:32.663: INFO: Pod "pod-projected-secrets-dbe0593e-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 3.203682964s
Feb 14 12:52:34.686: INFO: Pod "pod-projected-secrets-dbe0593e-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 5.225956974s
Feb 14 12:52:37.658: INFO: Pod "pod-projected-secrets-dbe0593e-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.198149089s
Feb 14 12:52:39.678: INFO: Pod "pod-projected-secrets-dbe0593e-4f28-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.217774744s
Feb 14 12:52:41.695: INFO: Pod "pod-projected-secrets-dbe0593e-4f28-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.2353991s
STEP: Saw pod success
Feb 14 12:52:41.695: INFO: Pod "pod-projected-secrets-dbe0593e-4f28-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:52:41.703: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-dbe0593e-4f28-11ea-af88-0242ac110007 container secret-volume-test: 
STEP: delete the pod
Feb 14 12:52:42.517: INFO: Waiting for pod pod-projected-secrets-dbe0593e-4f28-11ea-af88-0242ac110007 to disappear
Feb 14 12:52:42.549: INFO: Pod pod-projected-secrets-dbe0593e-4f28-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:52:42.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zlccl" for this suite.
Feb 14 12:52:48.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:52:48.916: INFO: namespace: e2e-tests-projected-zlccl, resource: bindings, ignored listing per whitelist
Feb 14 12:52:49.063: INFO: namespace e2e-tests-projected-zlccl deletion completed in 6.436405498s

• [SLOW TEST:19.942 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
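A sketch of the projected-secret case above: the same secret is consumed through two projected volumes mounted at different paths in one pod. Names and the key read by the command are illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume-1/data-1 /etc/projected-secret-volume-2/data-1"]
    volumeMounts:
    - name: projected-secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: projected-secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:
  - name: projected-secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test
  - name: projected-secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
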
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:52:49.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Feb 14 12:52:59.315: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-e7a9de02-4f28-11ea-af88-0242ac110007", GenerateName:"", Namespace:"e2e-tests-pods-kth64", SelfLink:"/api/v1/namespaces/e2e-tests-pods-kth64/pods/pod-submit-remove-e7a9de02-4f28-11ea-af88-0242ac110007", UID:"e7b04c32-4f28-11ea-a994-fa163e34d433", ResourceVersion:"21648892", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717281569, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"217720635", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-b7p86", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0025f2d00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b7p86", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f77028), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002053620), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f77060)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f77080)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001f77088), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f7708c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717281569, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717281577, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717281577, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717281569, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc00221f1a0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00221f1e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://6a6d333ea8ca605f313b479eeba5f0f72eb6771558b3880ad75fd06652ff26e2"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:53:12.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-kth64" for this suite.
Feb 14 12:53:18.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:53:18.959: INFO: namespace: e2e-tests-pods-kth64, resource: bindings, ignored listing per whitelist
Feb 14 12:53:19.198: INFO: namespace e2e-tests-pods-kth64 deletion completed in 6.384310866s

• [SLOW TEST:30.135 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
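The pod dumped in the test above reduces to a small manifest: the test submits it, watches for the creation event, then deletes it gracefully (the 30s terminationGracePeriodSeconds visible in the dump) and waits for the deletion event. A sketch with the generated name suffix and label value omitted:

apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove           # the run appends a generated suffix
  labels:
    name: foo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
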
S
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:53:19.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-f9a38a98-4f28-11ea-af88-0242ac110007
Feb 14 12:53:19.408: INFO: Pod name my-hostname-basic-f9a38a98-4f28-11ea-af88-0242ac110007: Found 0 pods out of 1
Feb 14 12:53:24.422: INFO: Pod name my-hostname-basic-f9a38a98-4f28-11ea-af88-0242ac110007: Found 1 pods out of 1
Feb 14 12:53:24.423: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f9a38a98-4f28-11ea-af88-0242ac110007" are running
Feb 14 12:53:30.454: INFO: Pod "my-hostname-basic-f9a38a98-4f28-11ea-af88-0242ac110007-t9bdc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 12:53:19 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 12:53:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f9a38a98-4f28-11ea-af88-0242ac110007]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 12:53:19 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f9a38a98-4f28-11ea-af88-0242ac110007]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 12:53:19 +0000 UTC Reason: Message:}])
Feb 14 12:53:30.454: INFO: Trying to dial the pod
Feb 14 12:53:35.540: INFO: Controller my-hostname-basic-f9a38a98-4f28-11ea-af88-0242ac110007: Got expected result from replica 1 [my-hostname-basic-f9a38a98-4f28-11ea-af88-0242ac110007-t9bdc]: "my-hostname-basic-f9a38a98-4f28-11ea-af88-0242ac110007-t9bdc", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:53:35.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-6w9hz" for this suite.
Feb 14 12:53:41.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:53:41.621: INFO: namespace: e2e-tests-replication-controller-6w9hz, resource: bindings, ignored listing per whitelist
Feb 14 12:53:41.910: INFO: namespace e2e-tests-replication-controller-6w9hz deletion completed in 6.35736987s

• [SLOW TEST:22.711 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
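
A hand-written ReplicationController with the same shape as the generated one above would look roughly like the following; the name, label and replica count are illustrative, and the serve-hostname image (which answers requests with its own pod name, the value the test compares against) is the same test-image family used elsewhere in this run:

cat <<'EOF' | kubectl create -f - --namespace=<test-namespace>
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
EOF
# each replica should report its own pod name when dialled
kubectl get pods -l name=my-hostname-basic --namespace=<test-namespace> -o wide
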
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:53:41.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 14 12:53:42.148: INFO: Waiting up to 5m0s for pod "pod-07328eef-4f29-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-h85rk" to be "success or failure"
Feb 14 12:53:42.155: INFO: Pod "pod-07328eef-4f29-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.450979ms
Feb 14 12:53:44.555: INFO: Pod "pod-07328eef-4f29-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.406611264s
Feb 14 12:53:46.569: INFO: Pod "pod-07328eef-4f29-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.420894796s
Feb 14 12:53:48.602: INFO: Pod "pod-07328eef-4f29-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.453269114s
Feb 14 12:53:50.631: INFO: Pod "pod-07328eef-4f29-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.482911876s
Feb 14 12:53:52.719: INFO: Pod "pod-07328eef-4f29-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.57046392s
Feb 14 12:53:54.796: INFO: Pod "pod-07328eef-4f29-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.647254589s
STEP: Saw pod success
Feb 14 12:53:54.796: INFO: Pod "pod-07328eef-4f29-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:53:54.805: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-07328eef-4f29-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 12:53:55.300: INFO: Waiting for pod pod-07328eef-4f29-11ea-af88-0242ac110007 to disappear
Feb 14 12:53:55.345: INFO: Pod pod-07328eef-4f29-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:53:55.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-h85rk" for this suite.
Feb 14 12:54:01.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:54:01.485: INFO: namespace: e2e-tests-emptydir-h85rk, resource: bindings, ignored listing per whitelist
Feb 14 12:54:01.629: INFO: namespace e2e-tests-emptydir-h85rk deletion completed in 6.273462704s

• [SLOW TEST:19.718 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
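
The pod under test writes a file into a memory-backed emptyDir as a non-root user and verifies the 0644 mode; a rough stand-in (user ID, mount path and busybox image are illustrative, not the suite's mounttest spec) is:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "umask 022 && echo hello > /mnt/volume/file && ls -l /mnt/volume/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory         # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-0644-tmpfs   # once Succeeded, the listing should show -rw-r--r--
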
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:54:01.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-g95n2
Feb 14 12:54:12.942: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-g95n2
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 12:54:12.984: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:58:13.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-g95n2" for this suite.
Feb 14 12:58:19.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:58:19.679: INFO: namespace: e2e-tests-container-probe-g95n2, resource: bindings, ignored listing per whitelist
Feb 14 12:58:19.690: INFO: namespace e2e-tests-container-probe-g95n2 deletion completed in 6.193958218s

• [SLOW TEST:258.061 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
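
The probe here is an HTTP GET liveness check against a path the container actually serves, so restartCount stays at 0 for the roughly four-minute observation window; a minimal sketch (nginx and its root path stand in for the suite's dedicated /healthz test server) is:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: nginx:1.14-alpine
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'   # stays 0
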
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:58:19.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 12:58:19.966: INFO: Waiting up to 5m0s for pod "downwardapi-volume-acbae93b-4f29-11ea-af88-0242ac110007" in namespace "e2e-tests-downward-api-sz5g9" to be "success or failure"
Feb 14 12:58:19.981: INFO: Pod "downwardapi-volume-acbae93b-4f29-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.005412ms
Feb 14 12:58:22.001: INFO: Pod "downwardapi-volume-acbae93b-4f29-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034742798s
Feb 14 12:58:24.028: INFO: Pod "downwardapi-volume-acbae93b-4f29-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061955374s
Feb 14 12:58:26.713: INFO: Pod "downwardapi-volume-acbae93b-4f29-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.746894082s
Feb 14 12:58:28.726: INFO: Pod "downwardapi-volume-acbae93b-4f29-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.760393176s
Feb 14 12:58:30.757: INFO: Pod "downwardapi-volume-acbae93b-4f29-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.791206915s
Feb 14 12:58:32.789: INFO: Pod "downwardapi-volume-acbae93b-4f29-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.82299414s
STEP: Saw pod success
Feb 14 12:58:32.789: INFO: Pod "downwardapi-volume-acbae93b-4f29-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 12:58:32.797: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-acbae93b-4f29-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 12:58:33.229: INFO: Waiting for pod downwardapi-volume-acbae93b-4f29-11ea-af88-0242ac110007 to disappear
Feb 14 12:58:33.255: INFO: Pod downwardapi-volume-acbae93b-4f29-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:58:33.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sz5g9" for this suite.
Feb 14 12:58:41.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:58:41.583: INFO: namespace: e2e-tests-downward-api-sz5g9, resource: bindings, ignored listing per whitelist
Feb 14 12:58:41.689: INFO: namespace e2e-tests-downward-api-sz5g9 deletion completed in 8.249912222s

• [SLOW TEST:21.999 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
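
The downwardAPI volume in this test exposes limits.memory for a container that sets no memory limit, so the projected file falls back to the node's allocatable memory, which is what the test asserts; a sketch of such a volume (names, image and divisor are illustrative) is:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-limit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container   # no memory limit set, so node allocatable is reported
          resource: limits.memory
          divisor: 1Mi
EOF
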
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:58:41.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Feb 14 12:58:42.188: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:58:42.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6s276" for this suite.
Feb 14 12:58:48.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 12:58:49.003: INFO: namespace: e2e-tests-kubectl-6s276, resource: bindings, ignored listing per whitelist
Feb 14 12:58:49.082: INFO: namespace e2e-tests-kubectl-6s276 deletion completed in 6.619134114s

• [SLOW TEST:7.392 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
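
--port 0 (the -p 0 seen above) asks kubectl proxy to bind a random free port and report it on stdout, which the test then curls; by hand:

kubectl proxy --port=0 &                    # prints: Starting to serve on 127.0.0.1:<chosen-port>
curl http://127.0.0.1:<chosen-port>/api/    # returns the APIVersions document
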
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 12:58:49.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 14 12:59:13.635: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 12:59:13.650: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 12:59:15.651: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 12:59:15.670: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 12:59:17.651: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 12:59:17.676: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 12:59:19.651: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 12:59:19.669: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 12:59:21.651: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 12:59:21.666: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 12:59:23.651: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 12:59:23.674: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 12:59:25.651: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 12:59:25.665: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 12:59:27.651: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 12:59:27.667: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 12:59:29.651: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 12:59:29.665: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 12:59:31.651: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 12:59:32.231: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 12:59:33.651: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 12:59:33.665: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 12:59:35.651: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 12:59:35.667: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 12:59:37.651: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 12:59:37.667: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 12:59:37.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-p9mh9" for this suite.
Feb 14 13:00:01.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:00:01.870: INFO: namespace: e2e-tests-container-lifecycle-hook-p9mh9, resource: bindings, ignored listing per whitelist
Feb 14 13:00:01.960: INFO: namespace e2e-tests-container-lifecycle-hook-p9mh9 deletion completed in 24.255777163s

• [SLOW TEST:72.876 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
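
The pod under test carries a preStop exec hook, which the kubelet runs to completion before stopping the container during the delete; a pared-down spec with the same shape (image and handler command are illustrative) is:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo prestop ran > /tmp/prestop && sleep 5"]
EOF
kubectl delete pod pod-with-prestop-exec-hook   # the hook runs before the container receives SIGTERM
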
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:00:01.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-9mptc
Feb 14 13:00:14.244: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-9mptc
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 13:00:14.249: INFO: Initial restart count of pod liveness-http is 0
Feb 14 13:00:33.334: INFO: Restart count of pod e2e-tests-container-probe-9mptc/liveness-http is now 1 (19.085066982s elapsed)
Feb 14 13:00:53.603: INFO: Restart count of pod e2e-tests-container-probe-9mptc/liveness-http is now 2 (39.353284907s elapsed)
Feb 14 13:01:18.241: INFO: Restart count of pod e2e-tests-container-probe-9mptc/liveness-http is now 3 (1m3.991655467s elapsed)
Feb 14 13:01:32.605: INFO: Restart count of pod e2e-tests-container-probe-9mptc/liveness-http is now 4 (1m18.35560514s elapsed)
Feb 14 13:02:43.633: INFO: Restart count of pod e2e-tests-container-probe-9mptc/liveness-http is now 5 (2m29.383526019s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:02:43.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-9mptc" for this suite.
Feb 14 13:02:50.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:02:50.278: INFO: namespace: e2e-tests-container-probe-9mptc, resource: bindings, ignored listing per whitelist
Feb 14 13:02:50.322: INFO: namespace e2e-tests-container-probe-9mptc deletion completed in 6.415481974s

• [SLOW TEST:168.361 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
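
Here the liveness probe is made to fail, so the kubelet keeps restarting the container and the test asserts the counter only ever grows; the same counter can be polled directly (namespace is a placeholder):

while true; do
  kubectl get pod liveness-http --namespace=<test-namespace> \
    -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
  sleep 15
done
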
------------------------------
SSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:02:50.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 14 13:03:03.428: INFO: Successfully updated pod "pod-update-activedeadlineseconds-4e145ebb-4f2a-11ea-af88-0242ac110007"
Feb 14 13:03:03.428: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-4e145ebb-4f2a-11ea-af88-0242ac110007" in namespace "e2e-tests-pods-jb66d" to be "terminated due to deadline exceeded"
Feb 14 13:03:03.450: INFO: Pod "pod-update-activedeadlineseconds-4e145ebb-4f2a-11ea-af88-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 21.871343ms
Feb 14 13:03:06.727: INFO: Pod "pod-update-activedeadlineseconds-4e145ebb-4f2a-11ea-af88-0242ac110007": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 3.298855795s
Feb 14 13:03:06.728: INFO: Pod "pod-update-activedeadlineseconds-4e145ebb-4f2a-11ea-af88-0242ac110007" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:03:06.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-jb66d" for this suite.
Feb 14 13:03:17.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:03:17.527: INFO: namespace: e2e-tests-pods-jb66d, resource: bindings, ignored listing per whitelist
Feb 14 13:03:17.688: INFO: namespace e2e-tests-pods-jb66d deletion completed in 10.911917298s

• [SLOW TEST:27.365 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
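
activeDeadlineSeconds is one of the few pod-spec fields that may be updated in place (it can be added or reduced, but not removed or increased), and the update step above amounts to a patch like this, after which the pod fails with reason DeadlineExceeded:

kubectl patch pod <pod-name> --namespace=<test-namespace> \
  -p '{"spec":{"activeDeadlineSeconds":5}}'
kubectl get pod <pod-name> --namespace=<test-namespace> \
  -o jsonpath='{.status.phase}/{.status.reason}{"\n"}'   # eventually Failed/DeadlineExceeded
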
------------------------------
SSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:03:17.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Feb 14 13:03:17.888: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-xn4nn" to be "success or failure"
Feb 14 13:03:17.894: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.402271ms
Feb 14 13:03:19.936: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047800325s
Feb 14 13:03:21.960: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072344987s
Feb 14 13:03:24.004: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116079799s
Feb 14 13:03:26.024: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136140313s
Feb 14 13:03:28.038: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.149832942s
Feb 14 13:03:30.218: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 12.330603216s
Feb 14 13:03:32.231: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.343448383s
STEP: Saw pod success
Feb 14 13:03:32.231: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 14 13:03:32.246: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 14 13:03:32.689: INFO: Waiting for pod pod-host-path-test to disappear
Feb 14 13:03:32.705: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:03:32.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-xn4nn" for this suite.
Feb 14 13:03:40.801: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:03:40.920: INFO: namespace: e2e-tests-hostpath-xn4nn, resource: bindings, ignored listing per whitelist
Feb 14 13:03:40.963: INFO: namespace e2e-tests-hostpath-xn4nn deletion completed in 8.251572916s

• [SLOW TEST:23.274 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
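
The hostPath check mounts a directory from the node and inspects the mode it is presented with inside the container; a loose stand-in (path, image and the ls-based check are illustrative, the suite uses its own mounttest container) is:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["/bin/sh", "-c", "ls -ld /mnt/hostdir"]
    volumeMounts:
    - name: hostdir
      mountPath: /mnt/hostdir
  volumes:
  - name: hostdir
    hostPath:
      path: /tmp
EOF
kubectl logs pod-host-path-test   # the listing shows the mode the hostPath directory was mounted with
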
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:03:40.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 14 13:03:41.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-wdh7l'
Feb 14 13:03:45.521: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 14 13:03:45.522: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 14 13:03:47.710: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-2bwrq]
Feb 14 13:03:47.710: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-2bwrq" in namespace "e2e-tests-kubectl-wdh7l" to be "running and ready"
Feb 14 13:03:47.730: INFO: Pod "e2e-test-nginx-rc-2bwrq": Phase="Pending", Reason="", readiness=false. Elapsed: 19.393877ms
Feb 14 13:03:50.911: INFO: Pod "e2e-test-nginx-rc-2bwrq": Phase="Pending", Reason="", readiness=false. Elapsed: 3.200794774s
Feb 14 13:03:52.945: INFO: Pod "e2e-test-nginx-rc-2bwrq": Phase="Pending", Reason="", readiness=false. Elapsed: 5.234380952s
Feb 14 13:03:54.962: INFO: Pod "e2e-test-nginx-rc-2bwrq": Phase="Pending", Reason="", readiness=false. Elapsed: 7.251119867s
Feb 14 13:03:57.341: INFO: Pod "e2e-test-nginx-rc-2bwrq": Phase="Pending", Reason="", readiness=false. Elapsed: 9.631022237s
Feb 14 13:03:59.360: INFO: Pod "e2e-test-nginx-rc-2bwrq": Phase="Pending", Reason="", readiness=false. Elapsed: 11.649261371s
Feb 14 13:04:01.401: INFO: Pod "e2e-test-nginx-rc-2bwrq": Phase="Pending", Reason="", readiness=false. Elapsed: 13.690732784s
Feb 14 13:04:03.416: INFO: Pod "e2e-test-nginx-rc-2bwrq": Phase="Running", Reason="", readiness=true. Elapsed: 15.705187058s
Feb 14 13:04:03.416: INFO: Pod "e2e-test-nginx-rc-2bwrq" satisfied condition "running and ready"
Feb 14 13:04:03.416: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-2bwrq]
Feb 14 13:04:03.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wdh7l'
Feb 14 13:04:04.741: INFO: stderr: ""
Feb 14 13:04:04.741: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Feb 14 13:04:04.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-wdh7l'
Feb 14 13:04:04.920: INFO: stderr: ""
Feb 14 13:04:04.920: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:04:04.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wdh7l" for this suite.
Feb 14 13:04:27.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:04:27.048: INFO: namespace: e2e-tests-kubectl-wdh7l, resource: bindings, ignored listing per whitelist
Feb 14 13:04:27.161: INFO: namespace e2e-tests-kubectl-wdh7l deletion completed in 22.186969681s

• [SLOW TEST:46.198 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:04:27.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:04:39.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-dxcfx" for this suite.
Feb 14 13:05:23.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:05:23.697: INFO: namespace: e2e-tests-kubelet-test-dxcfx, resource: bindings, ignored listing per whitelist
Feb 14 13:05:24.128: INFO: namespace e2e-tests-kubelet-test-dxcfx deletion completed in 44.593377761s

• [SLOW TEST:56.967 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
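
The assertion here is simply that whatever the busybox command writes to stdout is retrievable through kubectl logs; reproduced by hand (pod name and message are illustrative):

kubectl run busybox-logs --generator=run-pod/v1 --image=busybox --restart=Never \
  --command -- /bin/sh -c 'echo hello from the busybox pod'
kubectl logs busybox-logs          # once the pod completes, prints: hello from the busybox pod
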
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:05:24.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Feb 14 13:05:24.423: INFO: Waiting up to 5m0s for pod "client-containers-a9cab2d4-4f2a-11ea-af88-0242ac110007" in namespace "e2e-tests-containers-glkgv" to be "success or failure"
Feb 14 13:05:24.443: INFO: Pod "client-containers-a9cab2d4-4f2a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 19.814131ms
Feb 14 13:05:26.461: INFO: Pod "client-containers-a9cab2d4-4f2a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03771861s
Feb 14 13:05:28.485: INFO: Pod "client-containers-a9cab2d4-4f2a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060879468s
Feb 14 13:05:30.532: INFO: Pod "client-containers-a9cab2d4-4f2a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108382768s
Feb 14 13:05:32.564: INFO: Pod "client-containers-a9cab2d4-4f2a-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.140817505s
Feb 14 13:05:34.594: INFO: Pod "client-containers-a9cab2d4-4f2a-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.170833048s
STEP: Saw pod success
Feb 14 13:05:34.595: INFO: Pod "client-containers-a9cab2d4-4f2a-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 13:05:34.605: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-a9cab2d4-4f2a-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 13:05:34.794: INFO: Waiting for pod client-containers-a9cab2d4-4f2a-11ea-af88-0242ac110007 to disappear
Feb 14 13:05:34.815: INFO: Pod client-containers-a9cab2d4-4f2a-11ea-af88-0242ac110007 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:05:34.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-glkgv" for this suite.
Feb 14 13:05:41.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:05:41.263: INFO: namespace: e2e-tests-containers-glkgv, resource: bindings, ignored listing per whitelist
Feb 14 13:05:41.272: INFO: namespace e2e-tests-containers-glkgv deletion completed in 6.379688875s

• [SLOW TEST:17.142 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
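
"Override all" means both the image's entrypoint and its default arguments are replaced through the pod spec; the relevant fragment (busybox and the echoed strings are illustrative) is:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]              # replaces the image ENTRYPOINT
    args: ["override", "arguments"]     # replaces the image CMD
EOF
kubectl logs client-containers-override   # prints: override arguments
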
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:05:41.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Feb 14 13:05:41.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qbctx'
Feb 14 13:05:42.315: INFO: stderr: ""
Feb 14 13:05:42.316: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 14 13:05:43.330: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:43.330: INFO: Found 0 / 1
Feb 14 13:05:44.332: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:44.332: INFO: Found 0 / 1
Feb 14 13:05:47.268: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:47.268: INFO: Found 0 / 1
Feb 14 13:05:47.561: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:47.561: INFO: Found 0 / 1
Feb 14 13:05:48.338: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:48.338: INFO: Found 0 / 1
Feb 14 13:05:49.345: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:49.346: INFO: Found 0 / 1
Feb 14 13:05:50.343: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:50.343: INFO: Found 0 / 1
Feb 14 13:05:52.163: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:52.164: INFO: Found 0 / 1
Feb 14 13:05:52.575: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:52.576: INFO: Found 0 / 1
Feb 14 13:05:53.331: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:53.331: INFO: Found 0 / 1
Feb 14 13:05:54.324: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:54.324: INFO: Found 0 / 1
Feb 14 13:05:55.343: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:55.344: INFO: Found 0 / 1
Feb 14 13:05:56.340: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:56.340: INFO: Found 1 / 1
Feb 14 13:05:56.340: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 14 13:05:56.345: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:56.345: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 14 13:05:56.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-rsqzz --namespace=e2e-tests-kubectl-qbctx -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 14 13:05:56.637: INFO: stderr: ""
Feb 14 13:05:56.638: INFO: stdout: "pod/redis-master-rsqzz patched\n"
STEP: checking annotations
Feb 14 13:05:56.725: INFO: Selector matched 1 pods for map[app:redis]
Feb 14 13:05:56.725: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:05:56.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qbctx" for this suite.
Feb 14 13:06:22.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:06:22.910: INFO: namespace: e2e-tests-kubectl-qbctx, resource: bindings, ignored listing per whitelist
Feb 14 13:06:23.108: INFO: namespace e2e-tests-kubectl-qbctx deletion completed in 26.361612852s

• [SLOW TEST:41.836 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
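
After the patch shown above, the annotation could be confirmed directly against the pod patched in this run:

kubectl get pod redis-master-rsqzz --namespace=e2e-tests-kubectl-qbctx \
  -o jsonpath='{.metadata.annotations.x}{"\n"}'   # prints: y
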
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:06:23.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:06:23.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-x8llf" for this suite.
Feb 14 13:06:47.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:06:47.736: INFO: namespace: e2e-tests-kubelet-test-x8llf, resource: bindings, ignored listing per whitelist
Feb 14 13:06:47.864: INFO: namespace e2e-tests-kubelet-test-x8llf deletion completed in 24.313589119s

• [SLOW TEST:24.755 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:06:47.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Feb 14 13:06:48.103: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 14 13:06:54.786: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 14 13:07:06.851: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Feb 14 13:07:07.077: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-269rh,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-269rh/deployments/test-cleanup-deployment,UID:e6dd0e46-4f2a-11ea-a994-fa163e34d433,ResourceVersion:21650299,Generation:1,CreationTimestamp:2020-02-14 13:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb 14 13:07:07.091: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:07:07.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-269rh" for this suite.
Feb 14 13:07:15.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:07:15.652: INFO: namespace: e2e-tests-deployment-269rh, resource: bindings, ignored listing per whitelist
Feb 14 13:07:15.669: INFO: namespace e2e-tests-deployment-269rh deletion completed in 8.467314236s

• [SLOW TEST:27.804 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
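
Old ReplicaSets are removed here because the Deployment sets revisionHistoryLimit to 0 (visible as RevisionHistoryLimit:*0 in the dump above); the behaviour can be reproduced with a Deployment along these lines and then watching the ReplicaSet list shrink back to the current revision after a template change (names are illustrative, the redis test image is the one from the dump):

cat <<'EOF' | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0        # old ReplicaSets are deleted as soon as they are scaled down
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
kubectl get rs -l name=cleanup-pod   # only the ReplicaSet for the current revision remains
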
------------------------------
SSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:07:15.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:07:16.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-d9rwt" for this suite.
Feb 14 13:07:22.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:07:22.641: INFO: namespace: e2e-tests-services-d9rwt, resource: bindings, ignored listing per whitelist
Feb 14 13:07:22.697: INFO: namespace e2e-tests-services-d9rwt deletion completed in 6.528575694s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:7.028 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
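
The check inspects the built-in kubernetes service in the default namespace and expects it to expose the API securely (an https port, 443 by default); the same information is available with:

kubectl get service kubernetes --namespace=default \
  -o jsonpath='{.spec.ports[0].name}:{.spec.ports[0].port}{"\n"}'   # expected: https:443
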
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:07:22.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 14 13:07:41.329: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-f09b4c13-4f2a-11ea-af88-0242ac110007,GenerateName:,Namespace:e2e-tests-events-lg8vp,SelfLink:/api/v1/namespaces/e2e-tests-events-lg8vp/pods/send-events-f09b4c13-4f2a-11ea-af88-0242ac110007,UID:f09d0678-4f2a-11ea-a994-fa163e34d433,ResourceVersion:21650376,Generation:0,CreationTimestamp:2020-02-14 13:07:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 215177865,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-vgld6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgld6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-vgld6 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021fdbc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021fdbe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:07:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:07:40 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:07:40 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 13:07:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-02-14 13:07:23 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-14 13:07:38 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://beaf15574b5cc5782575295980b43cf12fcf50f48ccf732d3337a0a080b4274c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb 14 13:07:43.356: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 14 13:07:45.440: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:07:45.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-lg8vp" for this suite.
Feb 14 13:08:25.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:08:25.729: INFO: namespace: e2e-tests-events-lg8vp, resource: bindings, ignored listing per whitelist
Feb 14 13:08:25.853: INFO: namespace e2e-tests-events-lg8vp deletion completed in 40.329214885s

• [SLOW TEST:63.156 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
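
The scheduler event (Scheduled, emitted by default-scheduler) and the kubelet events (Pulled, Created, Started) that the test waits for can be listed for any pod with a field selector; pod name and namespace below are placeholders:

kubectl get events --namespace=<test-namespace> \
  --field-selector involvedObject.kind=Pod,involvedObject.name=<pod-name>
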
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:08:25.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Feb 14 13:08:26.168: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix997374477/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:08:26.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5bsgk" for this suite.
Feb 14 13:08:32.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:08:32.561: INFO: namespace: e2e-tests-kubectl-5bsgk, resource: bindings, ignored listing per whitelist
Feb 14 13:08:32.565: INFO: namespace e2e-tests-kubectl-5bsgk deletion completed in 6.305590025s

• [SLOW TEST:6.711 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
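
The proxy test only verifies that kubectl can serve the API over a Unix domain socket. A rough manual equivalent (the socket path is an arbitrary placeholder):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1

The test's assertion is essentially that the /api/ response comes back over the socket, which is what the curl call exercises.
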
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:08:32.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-1a921624-4f2b-11ea-af88-0242ac110007
STEP: Creating secret with name s-test-opt-upd-1a9216e7-4f2b-11ea-af88-0242ac110007
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-1a921624-4f2b-11ea-af88-0242ac110007
STEP: Updating secret s-test-opt-upd-1a9216e7-4f2b-11ea-af88-0242ac110007
STEP: Creating secret with name s-test-opt-create-1a921743-4f2b-11ea-af88-0242ac110007
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:10:12.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-b2m6x" for this suite.
Feb 14 13:10:38.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:10:38.349: INFO: namespace: e2e-tests-secrets-b2m6x, resource: bindings, ignored listing per whitelist
Feb 14 13:10:38.585: INFO: namespace e2e-tests-secrets-b2m6x deletion completed in 26.306406769s

• [SLOW TEST:126.019 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
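
The secrets test exercises secret volumes marked optional: one referenced secret is deleted, one is updated, and one is created only after the pod is running, and the mounted files are expected to follow. A minimal sketch of the "created later" case, with placeholder names and an arbitrary image:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: app
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: opt-secret
      mountPath: /etc/opt-secret
      readOnly: true
  volumes:
  - name: opt-secret
    secret:
      secretName: s-test-opt-create
      optional: true
EOF

# the pod starts even though the secret does not exist yet; create it afterwards
kubectl create secret generic s-test-opt-create --from-literal=data-1=value-1

# after the kubelet's next volume sync the key shows up inside the running container
kubectl exec optional-secret-demo -- cat /etc/opt-secret/data-1
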
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:10:38.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 14 13:10:39.262: INFO: Waiting up to 5m0s for pod "pod-656e92a1-4f2b-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-qwhtg" to be "success or failure"
Feb 14 13:10:39.287: INFO: Pod "pod-656e92a1-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 24.339382ms
Feb 14 13:10:41.435: INFO: Pod "pod-656e92a1-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172032818s
Feb 14 13:10:45.759: INFO: Pod "pod-656e92a1-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.496297327s
Feb 14 13:10:47.840: INFO: Pod "pod-656e92a1-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.577980711s
Feb 14 13:10:50.222: INFO: Pod "pod-656e92a1-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.959503759s
Feb 14 13:10:52.620: INFO: Pod "pod-656e92a1-4f2b-11ea-af88-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 13.357864598s
Feb 14 13:10:54.636: INFO: Pod "pod-656e92a1-4f2b-11ea-af88-0242ac110007": Phase="Running", Reason="", readiness=true. Elapsed: 15.373140757s
Feb 14 13:10:56.777: INFO: Pod "pod-656e92a1-4f2b-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.514360683s
STEP: Saw pod success
Feb 14 13:10:56.777: INFO: Pod "pod-656e92a1-4f2b-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 13:10:56.782: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-656e92a1-4f2b-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 13:10:57.560: INFO: Waiting for pod pod-656e92a1-4f2b-11ea-af88-0242ac110007 to disappear
Feb 14 13:10:57.575: INFO: Pod pod-656e92a1-4f2b-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:10:57.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-qwhtg" for this suite.
Feb 14 13:11:03.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:11:03.853: INFO: namespace: e2e-tests-emptydir-qwhtg, resource: bindings, ignored listing per whitelist
Feb 14 13:11:04.279: INFO: namespace e2e-tests-emptydir-qwhtg deletion completed in 6.692583955s

• [SLOW TEST:25.693 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
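
The emptyDir tests run a short-lived pod that writes into the volume and checks ownership and permission bits, then exits (hence the Pending → Running → Succeeded progression and the "success or failure" wait). A hedged sketch of the same shape, using busybox instead of the suite's mounttest image and placeholder names:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo
spec:
  securityContext:
    runAsUser: 1001            # run as a non-root UID
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/volume/file && ls -ln /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}               # default medium: backed by node storage
EOF

kubectl logs emptydir-nonroot-demo   # shows the listing written from inside the volume
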
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:11:04.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Feb 14 13:11:04.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 14 13:11:04.811: INFO: stderr: ""
Feb 14 13:11:04.811: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:11:04.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vg8hn" for this suite.
Feb 14 13:11:10.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:11:11.049: INFO: namespace: e2e-tests-kubectl-vg8hn, resource: bindings, ignored listing per whitelist
Feb 14 13:11:11.068: INFO: namespace e2e-tests-kubectl-vg8hn deletion completed in 6.233149293s

• [SLOW TEST:6.787 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
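
The api-versions check is a one-liner; the test only asserts that the core group's v1 appears in the list:

kubectl api-versions | grep -x v1

grep -x matches the whole line, so group versions such as rbac.authorization.k8s.io/v1 do not count as a hit.
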
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:11:11.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Feb 14 13:11:11.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-tg5xg'
Feb 14 13:11:11.664: INFO: stderr: ""
Feb 14 13:11:11.664: INFO: stdout: "pod/pause created\n"
Feb 14 13:11:11.664: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 14 13:11:11.665: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-tg5xg" to be "running and ready"
Feb 14 13:11:11.685: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 20.726128ms
Feb 14 13:11:13.697: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032005342s
Feb 14 13:11:15.728: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063354764s
Feb 14 13:11:18.142: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.477616412s
Feb 14 13:11:20.195: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.529904392s
Feb 14 13:11:20.195: INFO: Pod "pause" satisfied condition "running and ready"
Feb 14 13:11:20.195: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 14 13:11:20.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-tg5xg'
Feb 14 13:11:20.406: INFO: stderr: ""
Feb 14 13:11:20.406: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 14 13:11:20.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tg5xg'
Feb 14 13:11:20.577: INFO: stderr: ""
Feb 14 13:11:20.578: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 14 13:11:20.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-tg5xg'
Feb 14 13:11:20.717: INFO: stderr: ""
Feb 14 13:11:20.718: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 14 13:11:20.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-tg5xg'
Feb 14 13:11:20.873: INFO: stderr: ""
Feb 14 13:11:20.874: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Feb 14 13:11:20.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-tg5xg'
Feb 14 13:11:21.081: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 13:11:21.081: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 14 13:11:21.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-tg5xg'
Feb 14 13:11:21.322: INFO: stderr: "No resources found.\n"
Feb 14 13:11:21.322: INFO: stdout: ""
Feb 14 13:11:21.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-tg5xg -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 13:11:21.427: INFO: stderr: ""
Feb 14 13:11:21.427: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:11:21.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-tg5xg" for this suite.
Feb 14 13:11:28.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:11:28.980: INFO: namespace: e2e-tests-kubectl-tg5xg, resource: bindings, ignored listing per whitelist
Feb 14 13:11:30.061: INFO: namespace e2e-tests-kubectl-tg5xg deletion completed in 8.61571801s

• [SLOW TEST:18.993 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
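
Stripped of the suite's --kubeconfig and generated --namespace flags, the label test is just three kubectl invocations against the "pause" pod, as already visible in the log:

kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # -L prints the label as an extra column
kubectl label pods pause testing-label-                      # trailing '-' removes it
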
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:11:30.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0214 13:12:18.586203       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 13:12:18.586: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:12:18.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-z4h58" for this suite.
Feb 14 13:12:32.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:12:32.995: INFO: namespace: e2e-tests-gc-z4h58, resource: bindings, ignored listing per whitelist
Feb 14 13:12:33.083: INFO: namespace e2e-tests-gc-z4h58 deletion completed in 14.459565137s

• [SLOW TEST:63.022 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
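
The garbage-collector test deletes a replication controller with an orphaning delete option and then waits 30 seconds to confirm the controller's pods are left alone. With kubectl of this release line, the rough equivalent is (rc name and selector are placeholders):

kubectl delete rc <rc-name> --cascade=false     # orphan the dependents; newer kubectl spells this --cascade=orphan
kubectl get pods -l <rc-selector>               # the pods should still be Running, now without an owner
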
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:12:33.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 14 13:12:35.220: INFO: Waiting up to 5m0s for pod "pod-aa908d29-4f2b-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-2dgfr" to be "success or failure"
Feb 14 13:12:35.421: INFO: Pod "pod-aa908d29-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 200.661585ms
Feb 14 13:12:37.509: INFO: Pod "pod-aa908d29-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289424055s
Feb 14 13:12:39.527: INFO: Pod "pod-aa908d29-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307362146s
Feb 14 13:12:41.538: INFO: Pod "pod-aa908d29-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.318136056s
Feb 14 13:12:43.583: INFO: Pod "pod-aa908d29-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.362862076s
Feb 14 13:12:45.668: INFO: Pod "pod-aa908d29-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 10.448212557s
Feb 14 13:12:47.683: INFO: Pod "pod-aa908d29-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 12.463099479s
Feb 14 13:12:49.853: INFO: Pod "pod-aa908d29-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 14.632488812s
Feb 14 13:12:51.965: INFO: Pod "pod-aa908d29-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 16.745326231s
Feb 14 13:12:54.380: INFO: Pod "pod-aa908d29-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 19.159674408s
Feb 14 13:12:56.642: INFO: Pod "pod-aa908d29-4f2b-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.421829181s
STEP: Saw pod success
Feb 14 13:12:56.642: INFO: Pod "pod-aa908d29-4f2b-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 13:12:56.656: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-aa908d29-4f2b-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 13:12:57.384: INFO: Waiting for pod pod-aa908d29-4f2b-11ea-af88-0242ac110007 to disappear
Feb 14 13:12:57.396: INFO: Pod pod-aa908d29-4f2b-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:12:57.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2dgfr" for this suite.
Feb 14 13:13:05.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:13:05.643: INFO: namespace: e2e-tests-emptydir-2dgfr, resource: bindings, ignored listing per whitelist
Feb 14 13:13:05.732: INFO: namespace e2e-tests-emptydir-2dgfr deletion completed in 8.324720539s

• [SLOW TEST:32.648 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
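
The tmpfs variants differ from the default-medium test only in the volume definition: setting medium: Memory backs the emptyDir with a tmpfs mount on the node. A minimal way to observe that difference (names and image are placeholders, as above):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "grep ' /mnt/volume ' /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs instead of node disk
EOF

kubectl logs emptydir-tmpfs-demo   # the mount entry should report the filesystem type as tmpfs
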
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:13:05.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 14 13:13:05.898: INFO: Waiting up to 5m0s for pod "pod-bcdb5ddf-4f2b-11ea-af88-0242ac110007" in namespace "e2e-tests-emptydir-dqt4p" to be "success or failure"
Feb 14 13:13:05.910: INFO: Pod "pod-bcdb5ddf-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.283129ms
Feb 14 13:13:07.920: INFO: Pod "pod-bcdb5ddf-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021724952s
Feb 14 13:13:09.948: INFO: Pod "pod-bcdb5ddf-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049838604s
Feb 14 13:13:12.040: INFO: Pod "pod-bcdb5ddf-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141460894s
Feb 14 13:13:14.060: INFO: Pod "pod-bcdb5ddf-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162117009s
Feb 14 13:13:16.076: INFO: Pod "pod-bcdb5ddf-4f2b-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.177366032s
STEP: Saw pod success
Feb 14 13:13:16.076: INFO: Pod "pod-bcdb5ddf-4f2b-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 13:13:16.080: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-bcdb5ddf-4f2b-11ea-af88-0242ac110007 container test-container: 
STEP: delete the pod
Feb 14 13:13:16.269: INFO: Waiting for pod pod-bcdb5ddf-4f2b-11ea-af88-0242ac110007 to disappear
Feb 14 13:13:16.281: INFO: Pod pod-bcdb5ddf-4f2b-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:13:16.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-dqt4p" for this suite.
Feb 14 13:13:24.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:13:24.651: INFO: namespace: e2e-tests-emptydir-dqt4p, resource: bindings, ignored listing per whitelist
Feb 14 13:13:24.714: INFO: namespace e2e-tests-emptydir-dqt4p deletion completed in 8.408211127s

• [SLOW TEST:18.982 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Feb 14 13:13:24.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Feb 14 13:13:25.142: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c83f5f1a-4f2b-11ea-af88-0242ac110007" in namespace "e2e-tests-projected-pjkhr" to be "success or failure"
Feb 14 13:13:25.158: INFO: Pod "downwardapi-volume-c83f5f1a-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 15.253911ms
Feb 14 13:13:27.286: INFO: Pod "downwardapi-volume-c83f5f1a-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14303329s
Feb 14 13:13:29.303: INFO: Pod "downwardapi-volume-c83f5f1a-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160234586s
Feb 14 13:13:32.159: INFO: Pod "downwardapi-volume-c83f5f1a-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 7.016670505s
Feb 14 13:13:34.330: INFO: Pod "downwardapi-volume-c83f5f1a-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 9.187542524s
Feb 14 13:13:36.352: INFO: Pod "downwardapi-volume-c83f5f1a-4f2b-11ea-af88-0242ac110007": Phase="Pending", Reason="", readiness=false. Elapsed: 11.209862196s
Feb 14 13:13:38.370: INFO: Pod "downwardapi-volume-c83f5f1a-4f2b-11ea-af88-0242ac110007": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.227200048s
STEP: Saw pod success
Feb 14 13:13:38.370: INFO: Pod "downwardapi-volume-c83f5f1a-4f2b-11ea-af88-0242ac110007" satisfied condition "success or failure"
Feb 14 13:13:38.378: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c83f5f1a-4f2b-11ea-af88-0242ac110007 container client-container: 
STEP: delete the pod
Feb 14 13:13:39.305: INFO: Waiting for pod downwardapi-volume-c83f5f1a-4f2b-11ea-af88-0242ac110007 to disappear
Feb 14 13:13:39.721: INFO: Pod downwardapi-volume-c83f5f1a-4f2b-11ea-af88-0242ac110007 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Feb 14 13:13:39.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pjkhr" for this suite.
Feb 14 13:13:45.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 14 13:13:46.106: INFO: namespace: e2e-tests-projected-pjkhr, resource: bindings, ignored listing per whitelist
Feb 14 13:13:46.121: INFO: namespace e2e-tests-projected-pjkhr deletion completed in 6.373126297s

• [SLOW TEST:21.407 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
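
The projected downwardAPI test mounts the pod's own name as a file through a projected volume and checks the file's content. A hedged sketch with placeholder names and an arbitrary image:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF

kubectl logs downwardapi-podname-demo   # prints the pod's own name
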
SSSSSSSSS
Feb 14 13:13:46.122: INFO: Running AfterSuite actions on all nodes
Feb 14 13:13:46.122: INFO: Running AfterSuite actions on node 1
Feb 14 13:13:46.122: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8790.958 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS