I0120 10:47:14.618276       8 e2e.go:224] Starting e2e run "38129a3b-3b72-11ea-8bde-0242ac110005" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579517234 - Will randomize all specs
Will run 201 of 2164 specs
Jan 20 10:47:14.807: INFO: >>> kubeConfig: /root/.kube/config
Jan 20 10:47:14.809: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 20 10:47:14.828: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 20 10:47:14.859: INFO: 8 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 20 10:47:14.859: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 20 10:47:14.859: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 20 10:47:14.869: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 20 10:47:14.869: INFO: 1 / 1 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 20 10:47:14.869: INFO: e2e test version: v1.13.12
Jan 20 10:47:14.870: INFO: kube-apiserver version: v1.13.8
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 10:47:14.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
Jan 20 10:47:15.113: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-cxg4t
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 20 10:47:15.114: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 20 10:47:53.698: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-cxg4t PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 10:47:53.698: INFO: >>> kubeConfig: /root/.kube/config
I0120 10:47:53.787902       8 log.go:172] (0xc0013f84d0) (0xc001821040) Create stream
I0120 10:47:53.788058       8 log.go:172] (0xc0013f84d0) (0xc001821040) Stream added, broadcasting: 1
I0120 10:47:53.801114       8 log.go:172] (0xc0013f84d0) Reply frame received for 1
I0120 10:47:53.801234       8 log.go:172] (0xc0013f84d0) (0xc001262000) Create stream
I0120 10:47:53.801256       8 log.go:172] (0xc0013f84d0) (0xc001262000) Stream added, broadcasting: 3
I0120 10:47:53.804291       8 log.go:172] (0xc0013f84d0) Reply frame received for 3
I0120 10:47:53.804366       8 log.go:172] (0xc0013f84d0) (0xc0012620a0) Create stream
I0120 10:47:53.804378       8 log.go:172] (0xc0013f84d0) (0xc0012620a0) Stream added, broadcasting: 5
I0120 10:47:53.806603       8 log.go:172] (0xc0013f84d0) Reply frame received for 5
I0120 10:47:54.263597       8 log.go:172] (0xc0013f84d0) Data frame received for 3
I0120 10:47:54.263668       8 log.go:172] (0xc001262000) (3) Data frame handling
I0120 10:47:54.263698       8 log.go:172] (0xc001262000) (3) Data frame sent
I0120 10:47:54.431337       8 log.go:172] (0xc0013f84d0) (0xc001262000) Stream removed, broadcasting: 3
I0120 10:47:54.431586       8 log.go:172] (0xc0013f84d0) (0xc0012620a0) Stream removed, broadcasting: 5
I0120 10:47:54.431653       8 log.go:172] (0xc0013f84d0) Data frame received for 1
I0120 10:47:54.431668       8 log.go:172] (0xc001821040) (1) Data frame handling
I0120 10:47:54.431690       8 log.go:172] (0xc001821040) (1) Data frame sent
I0120 10:47:54.431705       8 log.go:172] (0xc0013f84d0) (0xc001821040) Stream removed, broadcasting: 1
I0120 10:47:54.431747       8 log.go:172] (0xc0013f84d0) Go away received
I0120 10:47:54.432298       8 log.go:172] (0xc0013f84d0) (0xc001821040) Stream removed, broadcasting: 1
I0120 10:47:54.432315       8 log.go:172] (0xc0013f84d0) (0xc001262000) Stream removed, broadcasting: 3
I0120 10:47:54.432322       8 log.go:172] (0xc0013f84d0) (0xc0012620a0) Stream removed, broadcasting: 5
Jan 20 10:47:54.432: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 10:47:54.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-cxg4t" for this suite.
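The connectivity check logged above can be sketched as a small shell helper. This is a hedged reconstruction, not the framework's exact code: the name `check_hostname` is hypothetical, while the curl flags, the filter for blank lines, and the target URL/expected endpoint (`http://10.32.0.4:8080/hostName`, `netserver-0`) are taken from the log.

```shell
#!/bin/sh
# Hedged sketch of the node-pod HTTP check: curl the target pod's /hostName
# endpoint with short timeouts, drop blank lines, and compare the reply with
# the expected endpoint name.
check_hostname() {
  url="$1"; expected="$2"
  got=$(curl -g -q -s --max-time 15 --connect-timeout 1 "$url" \
        | grep -v '^[[:space:]]*$')
  [ "$got" = "$expected" ]
}
# In the real run this command executes inside host-test-container-pod
# (container "hostexec") via the exec API, as the ExecWithOptions entry shows.
```

In the log the check succeeds, which is why the framework reports "Found all expected endpoints: [netserver-0]".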
Jan 20 10:48:20.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 10:48:20.581: INFO: namespace: e2e-tests-pod-network-test-cxg4t, resource: bindings, ignored listing per whitelist
Jan 20 10:48:20.704: INFO: namespace e2e-tests-pod-network-test-cxg4t deletion completed in 26.24685405s
• [SLOW TEST:65.834 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for node-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 10:48:20.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-5fe4fd92-3b72-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 20 10:48:20.970: INFO: Waiting up to 5m0s for pod "pod-secrets-5fe6aafc-3b72-11ea-8bde-0242ac110005" in namespace "e2e-tests-secrets-ndstn" to be "success or failure"
Jan 20 10:48:21.046: INFO: Pod "pod-secrets-5fe6aafc-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 75.768564ms
Jan 20 10:48:23.069: INFO: Pod "pod-secrets-5fe6aafc-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09898956s
Jan 20 10:48:25.097: INFO: Pod "pod-secrets-5fe6aafc-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126278756s
Jan 20 10:48:27.112: INFO: Pod "pod-secrets-5fe6aafc-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141296511s
Jan 20 10:48:29.124: INFO: Pod "pod-secrets-5fe6aafc-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.15344109s
Jan 20 10:48:31.139: INFO: Pod "pod-secrets-5fe6aafc-3b72-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.168607291s
STEP: Saw pod success
Jan 20 10:48:31.139: INFO: Pod "pod-secrets-5fe6aafc-3b72-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 10:48:31.143: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-5fe6aafc-3b72-11ea-8bde-0242ac110005 container secret-volume-test:
STEP: delete the pod
Jan 20 10:48:32.023: INFO: Waiting for pod pod-secrets-5fe6aafc-3b72-11ea-8bde-0242ac110005 to disappear
Jan 20 10:48:32.402: INFO: Pod pod-secrets-5fe6aafc-3b72-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 10:48:32.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ndstn" for this suite.
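The "mappings and Item Mode set" case above mounts a secret with an `items` remapping and a per-item file mode, then reads the file back from the volume. A hedged reconstruction of the kind of pod spec involved follows; all names, the key `data-1`, the path, and mode `0400` are assumptions (the conformance test generates its names randomly), not the test's literal manifest.

```yaml
# Hypothetical sketch of a pod consuming a secret with item mapping and mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Print the mode and content of the remapped secret file.
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1            # secret key to project
        path: new-path-data-1  # remapped filename ("mapping")
        mode: 0400             # per-item file mode ("Item Mode set")
```

Applied with `kubectl apply -f`, the pod should run to `Succeeded` once, matching the "success or failure" condition the framework polls for.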
Jan 20 10:48:38.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 10:48:38.807: INFO: namespace: e2e-tests-secrets-ndstn, resource: bindings, ignored listing per whitelist
Jan 20 10:48:38.943: INFO: namespace e2e-tests-secrets-ndstn deletion completed in 6.502602876s
• [SLOW TEST:18.239 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 10:48:38.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 20 10:48:39.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-ck44r'
Jan 20 10:48:40.854: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 20 10:48:40.854: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 20 10:48:41.041: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-m5hsh]
Jan 20 10:48:41.041: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-m5hsh" in namespace "e2e-tests-kubectl-ck44r" to be "running and ready"
Jan 20 10:48:41.065: INFO: Pod "e2e-test-nginx-rc-m5hsh": Phase="Pending", Reason="", readiness=false. Elapsed: 23.810776ms
Jan 20 10:48:43.648: INFO: Pod "e2e-test-nginx-rc-m5hsh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.606999639s
Jan 20 10:48:45.663: INFO: Pod "e2e-test-nginx-rc-m5hsh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.6215145s
Jan 20 10:48:47.681: INFO: Pod "e2e-test-nginx-rc-m5hsh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.640269419s
Jan 20 10:48:49.726: INFO: Pod "e2e-test-nginx-rc-m5hsh": Phase="Running", Reason="", readiness=true. Elapsed: 8.684492735s
Jan 20 10:48:49.726: INFO: Pod "e2e-test-nginx-rc-m5hsh" satisfied condition "running and ready"
Jan 20 10:48:49.726: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-m5hsh]
Jan 20 10:48:49.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ck44r'
Jan 20 10:48:49.988: INFO: stderr: ""
Jan 20 10:48:49.988: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jan 20 10:48:49.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-ck44r'
Jan 20 10:48:50.135: INFO: stderr: ""
Jan 20 10:48:50.135: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 10:48:50.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ck44r" for this suite.
Jan 20 10:49:14.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 10:49:14.331: INFO: namespace: e2e-tests-kubectl-ck44r, resource: bindings, ignored listing per whitelist
Jan 20 10:49:15.490: INFO: namespace e2e-tests-kubectl-ck44r deletion completed in 25.348003396s
• [SLOW TEST:36.547 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
[k8s.io] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 10:49:15.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 20 10:49:45.699: INFO: Container started at 2020-01-20 10:49:22 +0000 UTC, pod became ready at 2020-01-20 10:49:44 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 10:49:45.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-kvxms" for this suite.
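The readiness-probe test above checks that a container which starts at one time only becomes Ready after its configured initial delay (here it started at 10:49:22 and became ready at 10:49:44, roughly 20 seconds later), and that it never restarts. A hedged sketch of the kind of pod spec involved follows; the name, image, and the 20-second delay are assumptions inferred from the log, not the test's literal manifest.

```yaml
# Hypothetical sketch: a pod whose readiness probe has an initial delay,
# so it must NOT report Ready before the delay elapses and must not restart.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-probe-example
spec:
  containers:
  - name: test-webserver
    image: nginx:1.14-alpine
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # no Ready condition before this elapses
      periodSeconds: 5          # probe cadence after the delay
```

The ~22-second gap between container start and readiness in the log is consistent with such a delay plus one probe period.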
Jan 20 10:50:09.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 10:50:09.878: INFO: namespace: e2e-tests-container-probe-kvxms, resource: bindings, ignored listing per whitelist
Jan 20 10:50:09.997: INFO: namespace e2e-tests-container-probe-kvxms deletion completed in 24.291135759s
• [SLOW TEST:54.507 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 10:50:09.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jan 20 10:50:10.255: INFO: Waiting up to 5m0s for pod "client-containers-a109675d-3b72-11ea-8bde-0242ac110005" in namespace "e2e-tests-containers-glfdj" to be "success or failure"
Jan 20 10:50:10.269: INFO: Pod "client-containers-a109675d-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.002036ms
Jan 20 10:50:12.279: INFO: Pod "client-containers-a109675d-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023949911s
Jan 20 10:50:14.294: INFO: Pod "client-containers-a109675d-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039545602s
Jan 20 10:50:16.654: INFO: Pod "client-containers-a109675d-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.399048702s
Jan 20 10:50:18.685: INFO: Pod "client-containers-a109675d-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.430442311s
Jan 20 10:50:20.701: INFO: Pod "client-containers-a109675d-3b72-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.446223719s
STEP: Saw pod success
Jan 20 10:50:20.701: INFO: Pod "client-containers-a109675d-3b72-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 10:50:20.704: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-a109675d-3b72-11ea-8bde-0242ac110005 container test-container:
STEP: delete the pod
Jan 20 10:50:21.650: INFO: Waiting for pod client-containers-a109675d-3b72-11ea-8bde-0242ac110005 to disappear
Jan 20 10:50:21.669: INFO: Pod client-containers-a109675d-3b72-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 10:50:21.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-glfdj" for this suite.
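The "override the image's default arguments (docker cmd)" case exercises the rule that a pod's `args` field replaces the image's Docker `CMD` (while `command` would replace `ENTRYPOINT`). A hedged sketch of such a pod follows; the name, image, and argument values are hypothetical, not the test's actual manifest.

```yaml
# Hypothetical sketch: args replaces the image's default CMD.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # busybox has no ENTRYPOINT, so these args run as the container command
    # instead of the image's default CMD.
    args: ["echo", "override", "arguments"]
```

The test then reads the container log and asserts that the overridden arguments were what actually ran, which is why the framework fetches logs before deleting the pod.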
Jan 20 10:50:27.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 10:50:28.181: INFO: namespace: e2e-tests-containers-glfdj, resource: bindings, ignored listing per whitelist
Jan 20 10:50:28.275: INFO: namespace e2e-tests-containers-glfdj deletion completed in 6.441941627s
• [SLOW TEST:18.277 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 10:50:28.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 10:51:28.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-rl56v" for this suite.
Jan 20 10:51:54.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 10:51:54.717: INFO: namespace: e2e-tests-container-probe-rl56v, resource: bindings, ignored listing per whitelist
Jan 20 10:51:54.800: INFO: namespace e2e-tests-container-probe-rl56v deletion completed in 26.175963534s
• [SLOW TEST:86.525 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 10:51:54.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-df841511-3b72-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 20 10:51:55.183: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-df88787e-3b72-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-src8b" to be "success or failure"
Jan 20 10:51:55.212: INFO: Pod "pod-projected-secrets-df88787e-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.689285ms
Jan 20 10:51:57.222: INFO: Pod "pod-projected-secrets-df88787e-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039603425s
Jan 20 10:51:59.243: INFO: Pod "pod-projected-secrets-df88787e-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060039138s
Jan 20 10:52:01.603: INFO: Pod "pod-projected-secrets-df88787e-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.419785733s
Jan 20 10:52:03.720: INFO: Pod "pod-projected-secrets-df88787e-3b72-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.536885859s
Jan 20 10:52:06.006: INFO: Pod "pod-projected-secrets-df88787e-3b72-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.823229824s
STEP: Saw pod success
Jan 20 10:52:06.006: INFO: Pod "pod-projected-secrets-df88787e-3b72-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 10:52:06.015: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-df88787e-3b72-11ea-8bde-0242ac110005 container projected-secret-volume-test:
STEP: delete the pod
Jan 20 10:52:06.264: INFO: Waiting for pod pod-projected-secrets-df88787e-3b72-11ea-8bde-0242ac110005 to disappear
Jan 20 10:52:06.287: INFO: Pod pod-projected-secrets-df88787e-3b72-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 10:52:06.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-src8b" for this suite.
Jan 20 10:52:12.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 10:52:12.502: INFO: namespace: e2e-tests-projected-src8b, resource: bindings, ignored listing per whitelist
Jan 20 10:52:12.659: INFO: namespace e2e-tests-projected-src8b deletion completed in 6.354470652s
• [SLOW TEST:17.858 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 10:52:12.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0120 10:52:23.120161       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 20 10:52:23.120: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 10:52:23.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-7ssll" for this suite.
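The garbage-collector test above creates an RC, deletes it without orphaning (cascading deletion), and then waits until the GC has removed every pod the RC owned. A hedged sketch of that wait step follows; the function name `wait_for_gc`, the selector, the RC name, and the 120-second deadline are all hypothetical, not the framework's code.

```shell
#!/bin/sh
# Hedged sketch: poll until the garbage collector has deleted all pods
# matching a label selector, with a deadline.
wait_for_gc() {
  selector="$1"
  deadline=$(( $(date +%s) + 120 ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    # Count remaining pods owned by the deleted RC.
    n=$(kubectl get pods -l "$selector" --no-headers 2>/dev/null | wc -l | tr -d ' ')
    [ "$n" -eq 0 ] && return 0
    sleep 2
  done
  return 1
}
# Hypothetical usage against a cluster (v1.13-era kubectl used a boolean flag):
#   kubectl delete rc simpletest.rc --cascade=true
#   wait_for_gc name=simpletest
```

The key point is the "not orphaning" half: because the delete cascades, ownerReference-based garbage collection must bring the pod count to zero rather than leaving orphaned pods behind.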
Jan 20 10:52:29.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 10:52:29.271: INFO: namespace: e2e-tests-gc-7ssll, resource: bindings, ignored listing per whitelist Jan 20 10:52:29.332: INFO: namespace e2e-tests-gc-7ssll deletion completed in 6.20579527s • [SLOW TEST:16.672 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 10:52:29.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jan 20 10:52:30.332: INFO: Pod name wrapped-volume-race-f47c84b4-3b72-11ea-8bde-0242ac110005: Found 0 pods out of 5 Jan 20 10:52:35.355: INFO: Pod name wrapped-volume-race-f47c84b4-3b72-11ea-8bde-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f47c84b4-3b72-11ea-8bde-0242ac110005 in namespace 
e2e-tests-emptydir-wrapper-6wxfg, will wait for the garbage collector to delete the pods Jan 20 10:54:21.549: INFO: Deleting ReplicationController wrapped-volume-race-f47c84b4-3b72-11ea-8bde-0242ac110005 took: 30.647868ms Jan 20 10:54:21.850: INFO: Terminating ReplicationController wrapped-volume-race-f47c84b4-3b72-11ea-8bde-0242ac110005 pods took: 301.012245ms STEP: Creating RC which spawns configmap-volume pods Jan 20 10:55:13.591: INFO: Pod name wrapped-volume-race-55c58bd1-3b73-11ea-8bde-0242ac110005: Found 0 pods out of 5 Jan 20 10:55:18.670: INFO: Pod name wrapped-volume-race-55c58bd1-3b73-11ea-8bde-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-55c58bd1-3b73-11ea-8bde-0242ac110005 in namespace e2e-tests-emptydir-wrapper-6wxfg, will wait for the garbage collector to delete the pods Jan 20 10:57:12.906: INFO: Deleting ReplicationController wrapped-volume-race-55c58bd1-3b73-11ea-8bde-0242ac110005 took: 27.248812ms Jan 20 10:57:13.207: INFO: Terminating ReplicationController wrapped-volume-race-55c58bd1-3b73-11ea-8bde-0242ac110005 pods took: 300.562255ms STEP: Creating RC which spawns configmap-volume pods Jan 20 10:57:57.088: INFO: Pod name wrapped-volume-race-b73321fb-3b73-11ea-8bde-0242ac110005: Found 0 pods out of 5 Jan 20 10:58:02.122: INFO: Pod name wrapped-volume-race-b73321fb-3b73-11ea-8bde-0242ac110005: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b73321fb-3b73-11ea-8bde-0242ac110005 in namespace e2e-tests-emptydir-wrapper-6wxfg, will wait for the garbage collector to delete the pods Jan 20 11:00:16.329: INFO: Deleting ReplicationController wrapped-volume-race-b73321fb-3b73-11ea-8bde-0242ac110005 took: 31.798534ms Jan 20 11:00:16.529: INFO: Terminating ReplicationController wrapped-volume-race-b73321fb-3b73-11ea-8bde-0242ac110005 pods took: 200.647018ms STEP: Cleaning up the configMaps [AfterEach] 
[sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:01:06.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-6wxfg" for this suite. Jan 20 11:01:16.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:01:16.443: INFO: namespace: e2e-tests-emptydir-wrapper-6wxfg, resource: bindings, ignored listing per whitelist Jan 20 11:01:16.502: INFO: namespace e2e-tests-emptydir-wrapper-6wxfg deletion completed in 10.239142672s • [SLOW TEST:527.171 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:01:16.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with configMap that has name projected-configmap-test-upd-2e6404cf-3b74-11ea-8bde-0242ac110005 STEP: Creating the pod STEP: Updating configmap 
projected-configmap-test-upd-2e6404cf-3b74-11ea-8bde-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:01:31.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zml5n" for this suite.
Jan 20 11:01:55.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:01:55.136: INFO: namespace: e2e-tests-projected-zml5n, resource: bindings, ignored listing per whitelist
Jan 20 11:01:55.231: INFO: namespace e2e-tests-projected-zml5n deletion completed in 24.151123918s
• [SLOW TEST:38.728 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:01:55.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 20 11:01:55.475: INFO: Waiting up to 5m0s for pod
"pod-4563c6b9-3b74-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-jgfgh" to be "success or failure"
Jan 20 11:01:55.495: INFO: Pod "pod-4563c6b9-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.588153ms
Jan 20 11:01:57.987: INFO: Pod "pod-4563c6b9-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.511824818s
Jan 20 11:02:00.003: INFO: Pod "pod-4563c6b9-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.528256927s
Jan 20 11:02:02.029: INFO: Pod "pod-4563c6b9-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.553807531s
Jan 20 11:02:04.158: INFO: Pod "pod-4563c6b9-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.68278219s
Jan 20 11:02:06.167: INFO: Pod "pod-4563c6b9-3b74-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.692662637s
STEP: Saw pod success
Jan 20 11:02:06.168: INFO: Pod "pod-4563c6b9-3b74-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:02:06.171: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-4563c6b9-3b74-11ea-8bde-0242ac110005 container test-container:
STEP: delete the pod
Jan 20 11:02:07.109: INFO: Waiting for pod pod-4563c6b9-3b74-11ea-8bde-0242ac110005 to disappear
Jan 20 11:02:07.134: INFO: Pod pod-4563c6b9-3b74-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:02:07.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jgfgh" for this suite.
Jan 20 11:02:13.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:02:13.519: INFO: namespace: e2e-tests-emptydir-jgfgh, resource: bindings, ignored listing per whitelist
Jan 20 11:02:13.548: INFO: namespace e2e-tests-emptydir-jgfgh deletion completed in 6.407274852s
• [SLOW TEST:18.317 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:02:13.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 20 11:02:13.862: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:02:15.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-9g75z" for this suite.
Jan 20 11:02:21.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:02:21.675: INFO: namespace: e2e-tests-custom-resource-definition-9g75z, resource: bindings, ignored listing per whitelist
Jan 20 11:02:21.714: INFO: namespace e2e-tests-custom-resource-definition-9g75z deletion completed in 6.281127268s
• [SLOW TEST:8.166 seconds]
[sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:02:21.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 20 11:02:21.975: INFO: Running
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:24.653: INFO: stderr: ""
Jan 20 11:02:24.653: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 11:02:24.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:24.963: INFO: stderr: ""
Jan 20 11:02:24.963: INFO: stdout: "update-demo-nautilus-rv6lz update-demo-nautilus-trsn2 "
Jan 20 11:02:24.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rv6lz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:25.136: INFO: stderr: ""
Jan 20 11:02:25.136: INFO: stdout: ""
Jan 20 11:02:25.136: INFO: update-demo-nautilus-rv6lz is created but not running
Jan 20 11:02:30.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:30.504: INFO: stderr: ""
Jan 20 11:02:30.504: INFO: stdout: "update-demo-nautilus-rv6lz update-demo-nautilus-trsn2 "
Jan 20 11:02:30.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rv6lz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:30.901: INFO: stderr: ""
Jan 20 11:02:30.901: INFO: stdout: ""
Jan 20 11:02:30.901: INFO: update-demo-nautilus-rv6lz is created but not running
Jan 20 11:02:35.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:36.045: INFO: stderr: ""
Jan 20 11:02:36.046: INFO: stdout: "update-demo-nautilus-rv6lz update-demo-nautilus-trsn2 "
Jan 20 11:02:36.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rv6lz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:36.166: INFO: stderr: ""
Jan 20 11:02:36.166: INFO: stdout: "true"
Jan 20 11:02:36.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rv6lz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:36.269: INFO: stderr: ""
Jan 20 11:02:36.270: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 11:02:36.270: INFO: validating pod update-demo-nautilus-rv6lz
Jan 20 11:02:36.305: INFO: got data: {
  "image": "nautilus.jpg"
}
Jan 20 11:02:36.305: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 11:02:36.305: INFO: update-demo-nautilus-rv6lz is verified up and running
Jan 20 11:02:36.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-trsn2 -o template --template={{if (exists .
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:36.451: INFO: stderr: ""
Jan 20 11:02:36.451: INFO: stdout: "true"
Jan 20 11:02:36.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-trsn2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:36.590: INFO: stderr: ""
Jan 20 11:02:36.590: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 11:02:36.590: INFO: validating pod update-demo-nautilus-trsn2
Jan 20 11:02:36.613: INFO: got data: {
  "image": "nautilus.jpg"
}
Jan 20 11:02:36.613: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 11:02:36.613: INFO: update-demo-nautilus-trsn2 is verified up and running
STEP: scaling down the replication controller
Jan 20 11:02:36.616: INFO: scanned /root for discovery docs:
Jan 20 11:02:36.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:37.984: INFO: stderr: ""
Jan 20 11:02:37.984: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 11:02:37.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:38.635: INFO: stderr: ""
Jan 20 11:02:38.635: INFO: stdout: "update-demo-nautilus-rv6lz update-demo-nautilus-trsn2 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 20 11:02:43.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:43.842: INFO: stderr: ""
Jan 20 11:02:43.842: INFO: stdout: "update-demo-nautilus-rv6lz update-demo-nautilus-trsn2 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 20 11:02:48.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:49.010: INFO: stderr: ""
Jan 20 11:02:49.010: INFO: stdout: "update-demo-nautilus-rv6lz update-demo-nautilus-trsn2 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 20 11:02:54.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:54.133: INFO: stderr: ""
Jan 20 11:02:54.133: INFO: stdout: "update-demo-nautilus-rv6lz "
Jan 20 11:02:54.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rv6lz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:54.229: INFO: stderr: ""
Jan 20 11:02:54.229: INFO: stdout: "true"
Jan 20 11:02:54.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rv6lz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:54.379: INFO: stderr: ""
Jan 20 11:02:54.379: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 11:02:54.379: INFO: validating pod update-demo-nautilus-rv6lz
Jan 20 11:02:54.389: INFO: got data: {
  "image": "nautilus.jpg"
}
Jan 20 11:02:54.389: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 11:02:54.389: INFO: update-demo-nautilus-rv6lz is verified up and running
STEP: scaling up the replication controller
Jan 20 11:02:54.392: INFO: scanned /root for discovery docs:
Jan 20 11:02:54.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:55.947: INFO: stderr: ""
Jan 20 11:02:55.947: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 11:02:55.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:56.097: INFO: stderr: ""
Jan 20 11:02:56.097: INFO: stdout: "update-demo-nautilus-rv6lz update-demo-nautilus-wsv6b "
Jan 20 11:02:56.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rv6lz -o template --template={{if (exists .
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:56.219: INFO: stderr: ""
Jan 20 11:02:56.220: INFO: stdout: "true"
Jan 20 11:02:56.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rv6lz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:56.918: INFO: stderr: ""
Jan 20 11:02:56.918: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 11:02:56.918: INFO: validating pod update-demo-nautilus-rv6lz
Jan 20 11:02:56.943: INFO: got data: {
  "image": "nautilus.jpg"
}
Jan 20 11:02:56.943: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 11:02:56.944: INFO: update-demo-nautilus-rv6lz is verified up and running
Jan 20 11:02:56.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wsv6b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:02:57.496: INFO: stderr: ""
Jan 20 11:02:57.496: INFO: stdout: ""
Jan 20 11:02:57.496: INFO: update-demo-nautilus-wsv6b is created but not running
Jan 20 11:03:02.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:03:02.789: INFO: stderr: ""
Jan 20 11:03:02.789: INFO: stdout: "update-demo-nautilus-rv6lz update-demo-nautilus-wsv6b "
Jan 20 11:03:02.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rv6lz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:03:02.946: INFO: stderr: ""
Jan 20 11:03:02.946: INFO: stdout: "true"
Jan 20 11:03:02.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rv6lz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:03:03.078: INFO: stderr: ""
Jan 20 11:03:03.078: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 11:03:03.078: INFO: validating pod update-demo-nautilus-rv6lz
Jan 20 11:03:03.087: INFO: got data: {
  "image": "nautilus.jpg"
}
Jan 20 11:03:03.087: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 11:03:03.087: INFO: update-demo-nautilus-rv6lz is verified up and running
Jan 20 11:03:03.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wsv6b -o template --template={{if (exists .
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:03:03.208: INFO: stderr: ""
Jan 20 11:03:03.208: INFO: stdout: ""
Jan 20 11:03:03.208: INFO: update-demo-nautilus-wsv6b is created but not running
Jan 20 11:03:08.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:03:08.404: INFO: stderr: ""
Jan 20 11:03:08.404: INFO: stdout: "update-demo-nautilus-rv6lz update-demo-nautilus-wsv6b "
Jan 20 11:03:08.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rv6lz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:03:08.599: INFO: stderr: ""
Jan 20 11:03:08.599: INFO: stdout: "true"
Jan 20 11:03:08.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rv6lz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:03:08.755: INFO: stderr: ""
Jan 20 11:03:08.755: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 11:03:08.755: INFO: validating pod update-demo-nautilus-rv6lz
Jan 20 11:03:08.767: INFO: got data: {
  "image": "nautilus.jpg"
}
Jan 20 11:03:08.767: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 11:03:08.767: INFO: update-demo-nautilus-rv6lz is verified up and running
Jan 20 11:03:08.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wsv6b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:03:08.938: INFO: stderr: ""
Jan 20 11:03:08.938: INFO: stdout: "true"
Jan 20 11:03:08.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wsv6b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:03:09.039: INFO: stderr: ""
Jan 20 11:03:09.039: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 11:03:09.039: INFO: validating pod update-demo-nautilus-wsv6b
Jan 20 11:03:09.050: INFO: got data: {
  "image": "nautilus.jpg"
}
Jan 20 11:03:09.050: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 11:03:09.050: INFO: update-demo-nautilus-wsv6b is verified up and running
STEP: using delete to clean up resources
Jan 20 11:03:09.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:03:09.272: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
Jan 20 11:03:09.272: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 20 11:03:09.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-ssxcq'
Jan 20 11:03:09.450: INFO: stderr: "No resources found.\n"
Jan 20 11:03:09.450: INFO: stdout: ""
Jan 20 11:03:09.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-ssxcq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 20 11:03:09.664: INFO: stderr: ""
Jan 20 11:03:09.664: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:03:09.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-ssxcq" for this suite.
Jan 20 11:03:33.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:03:34.223: INFO: namespace: e2e-tests-kubectl-ssxcq, resource: bindings, ignored listing per whitelist
Jan 20 11:03:34.230: INFO: namespace e2e-tests-kubectl-ssxcq deletion completed in 24.536133527s
• [SLOW TEST:72.516 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:03:34.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 20 11:03:34.450: INFO: Waiting up to 5m0s for pod "downward-api-805ee839-3b74-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-229fv" to be "success or failure"
Jan 20 11:03:34.457: INFO: Pod "downward-api-805ee839-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false.
Elapsed: 6.667899ms
Jan 20 11:03:36.480: INFO: Pod "downward-api-805ee839-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029208812s
Jan 20 11:03:38.506: INFO: Pod "downward-api-805ee839-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055153403s
Jan 20 11:03:40.541: INFO: Pod "downward-api-805ee839-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090813112s
Jan 20 11:03:42.584: INFO: Pod "downward-api-805ee839-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.133180931s
Jan 20 11:03:44.613: INFO: Pod "downward-api-805ee839-3b74-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.162468775s
STEP: Saw pod success
Jan 20 11:03:44.613: INFO: Pod "downward-api-805ee839-3b74-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:03:44.620: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-805ee839-3b74-11ea-8bde-0242ac110005 container dapi-container:
STEP: delete the pod
Jan 20 11:03:44.739: INFO: Waiting for pod downward-api-805ee839-3b74-11ea-8bde-0242ac110005 to disappear
Jan 20 11:03:44.751: INFO: Pod downward-api-805ee839-3b74-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:03:44.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-229fv" for this suite.
Jan 20 11:03:50.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:03:50.926: INFO: namespace: e2e-tests-downward-api-229fv, resource: bindings, ignored listing per whitelist
Jan 20 11:03:50.991: INFO: namespace e2e-tests-downward-api-229fv deletion completed in 6.223440415s
• [SLOW TEST:16.761 seconds]
[sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:03:50.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 20 11:03:51.073: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:04:06.752: INFO:
Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-7vfp9" for this suite.
Jan 20 11:04:14.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:04:14.973: INFO: namespace: e2e-tests-init-container-7vfp9, resource: bindings, ignored listing per whitelist
Jan 20 11:04:15.136: INFO: namespace e2e-tests-init-container-7vfp9 deletion completed in 8.30932732s
• [SLOW TEST:24.144 seconds]
[k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:04:15.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 20 11:04:15.503: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 20 11:04:15.563: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 20 11:04:20.668: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP:
ensuring each pod is running Jan 20 11:04:24.690: INFO: Creating deployment "test-rolling-update-deployment" Jan 20 11:04:24.702: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 20 11:04:24.755: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 20 11:04:26.778: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 20 11:04:26.970: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115064, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115064, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115065, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115064, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:04:28.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115064, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115064, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115065, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115064, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:04:30.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115064, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115064, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115065, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115064, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:04:33.014: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 20 11:04:33.441: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-68q89,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-68q89/deployments/test-rolling-update-deployment,UID:9e567e18-3b74-11ea-a994-fa163e34d433,ResourceVersion:18840164,Generation:1,CreationTimestamp:2020-01-20 11:04:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-20 11:04:24 +0000 UTC 2020-01-20 11:04:24 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-20 11:04:32 +0000 UTC 2020-01-20 11:04:24 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 20 11:04:33.448: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-68q89,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-68q89/replicasets/test-rolling-update-deployment-75db98fb4c,UID:9e64bf4e-3b74-11ea-a994-fa163e34d433,ResourceVersion:18840155,Generation:1,CreationTimestamp:2020-01-20 11:04:24 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9e567e18-3b74-11ea-a994-fa163e34d433 0xc001f203c7 0xc001f203c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 20 11:04:33.448: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 20 11:04:33.448: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-68q89,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-68q89/replicasets/test-rolling-update-controller,UID:98dd9f25-3b74-11ea-a994-fa163e34d433,ResourceVersion:18840163,Generation:2,CreationTimestamp:2020-01-20 11:04:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9e567e18-3b74-11ea-a994-fa163e34d433 0xc001f20237 0xc001f20238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 20 11:04:33.456: INFO: Pod "test-rolling-update-deployment-75db98fb4c-xtwsr" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-xtwsr,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-68q89,SelfLink:/api/v1/namespaces/e2e-tests-deployment-68q89/pods/test-rolling-update-deployment-75db98fb4c-xtwsr,UID:9e69828a-3b74-11ea-a994-fa163e34d433,ResourceVersion:18840154,Generation:0,CreationTimestamp:2020-01-20 11:04:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c 9e64bf4e-3b74-11ea-a994-fa163e34d433 0xc001ccb747 0xc001ccb748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-rfrvj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rfrvj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-rfrvj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001ccb840} {node.kubernetes.io/unreachable Exists NoExecute 0xc001ccb860}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:04:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:04:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:04:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:04:24 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-20 11:04:24 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-20 11:04:31 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://dcd1753528ae94a8e735590df0b113b811a0f9fefa7f015d938bc02f6dfe03fb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:04:33.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-deployment-68q89" for this suite. Jan 20 11:04:41.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:04:41.871: INFO: namespace: e2e-tests-deployment-68q89, resource: bindings, ignored listing per whitelist Jan 20 11:04:41.957: INFO: namespace e2e-tests-deployment-68q89 deletion completed in 8.492713284s • [SLOW TEST:26.821 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:04:41.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-spsrv A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.e2e-tests-dns-spsrv;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-spsrv A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-spsrv;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-spsrv.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-spsrv.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-spsrv.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-spsrv.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-spsrv.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-spsrv.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-spsrv.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-spsrv.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-spsrv.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 129.26.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.26.129_udp@PTR;check="$$(dig +tcp +noall +answer +search 129.26.104.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.104.26.129_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-spsrv A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-spsrv;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-spsrv A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-spsrv;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-spsrv.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-spsrv.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-spsrv.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-spsrv.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-spsrv.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-spsrv.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-spsrv.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-spsrv.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-spsrv.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 129.26.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.26.129_udp@PTR;check="$$(dig +tcp +noall +answer +search 129.26.104.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.104.26.129_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 20 11:04:57.066: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.072: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.077: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-spsrv from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.083: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-spsrv from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.089: INFO: Unable to read wheezy_udp@dns-test-service.e2e-tests-dns-spsrv.svc from pod 
e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.099: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-spsrv.svc from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.104: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.111: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.120: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-spsrv.svc from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.178: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-spsrv.svc from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.184: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.189: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not 
find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.193: INFO: Unable to read 10.104.26.129_udp@PTR from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.197: INFO: Unable to read 10.104.26.129_tcp@PTR from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.202: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.206: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.212: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-spsrv from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.217: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-spsrv from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.222: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-spsrv.svc from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.226: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-spsrv.svc from 
pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.230: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.235: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.240: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-spsrv.svc from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.245: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-spsrv.svc from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.249: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.254: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.258: INFO: Unable to read 10.104.26.129_udp@PTR from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested 
resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.264: INFO: Unable to read 10.104.26.129_tcp@PTR from pod e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005) Jan 20 11:04:57.264: INFO: Lookups using e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.e2e-tests-dns-spsrv wheezy_tcp@dns-test-service.e2e-tests-dns-spsrv wheezy_udp@dns-test-service.e2e-tests-dns-spsrv.svc wheezy_tcp@dns-test-service.e2e-tests-dns-spsrv.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-spsrv.svc wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-spsrv.svc wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.104.26.129_udp@PTR 10.104.26.129_tcp@PTR jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-spsrv jessie_tcp@dns-test-service.e2e-tests-dns-spsrv jessie_udp@dns-test-service.e2e-tests-dns-spsrv.svc jessie_tcp@dns-test-service.e2e-tests-dns-spsrv.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-spsrv.svc jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-spsrv.svc jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-spsrv.svc jessie_udp@PodARecord jessie_tcp@PodARecord 10.104.26.129_udp@PTR 10.104.26.129_tcp@PTR] Jan 20 11:05:02.456: INFO: DNS probes using e2e-tests-dns-spsrv/dns-test-a9239fa0-3b74-11ea-8bde-0242ac110005 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:05:02.855: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-dns-spsrv" for this suite. Jan 20 11:05:08.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:05:09.016: INFO: namespace: e2e-tests-dns-spsrv, resource: bindings, ignored listing per whitelist Jan 20 11:05:09.081: INFO: namespace e2e-tests-dns-spsrv deletion completed in 6.208944334s • [SLOW TEST:27.124 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:05:09.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 20 11:05:09.255: INFO: Creating deployment "test-recreate-deployment" Jan 20 11:05:09.262: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 20 11:05:09.269: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created Jan 20 11:05:11.287: INFO: Waiting deployment 
"test-recreate-deployment" to complete Jan 20 11:05:11.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115109, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115109, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115109, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115109, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:05:13.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115109, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115109, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115109, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115109, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 
11:05:15.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115109, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115109, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115109, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115109, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:05:17.316: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 20 11:05:17.342: INFO: Updating deployment test-recreate-deployment Jan 20 11:05:17.343: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 20 11:05:18.063: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-q2jqq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-q2jqq/deployments/test-recreate-deployment,UID:b8e69820-3b74-11ea-a994-fa163e34d433,ResourceVersion:18840333,Generation:2,CreationTimestamp:2020-01-20 11:05:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-20 11:05:17 +0000 UTC 2020-01-20 11:05:17 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-20 11:05:17 +0000 UTC 2020-01-20 11:05:09 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jan 20 11:05:18.081: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-q2jqq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-q2jqq/replicasets/test-recreate-deployment-589c4bfd,UID:bddc6713-3b74-11ea-a994-fa163e34d433,ResourceVersion:18840330,Generation:1,CreationTimestamp:2020-01-20 11:05:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment b8e69820-3b74-11ea-a994-fa163e34d433 0xc0019ff96f 0xc0019ff980}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 20 11:05:18.081: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 20 11:05:18.081: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-q2jqq,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-q2jqq/replicasets/test-recreate-deployment-5bf7f65dc,UID:b8e983c9-3b74-11ea-a994-fa163e34d433,ResourceVersion:18840321,Generation:2,CreationTimestamp:2020-01-20 11:05:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment b8e69820-3b74-11ea-a994-fa163e34d433 0xc0019ffa40 0xc0019ffa41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 20 11:05:18.108: INFO: Pod "test-recreate-deployment-589c4bfd-bkd7w" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-bkd7w,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-q2jqq,SelfLink:/api/v1/namespaces/e2e-tests-deployment-q2jqq/pods/test-recreate-deployment-589c4bfd-bkd7w,UID:bdea7718-3b74-11ea-a994-fa163e34d433,ResourceVersion:18840334,Generation:0,CreationTimestamp:2020-01-20 11:05:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd bddc6713-3b74-11ea-a994-fa163e34d433 0xc000ccd70f 0xc000ccd720}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p9qxn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p9qxn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-p9qxn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ccd780} {node.kubernetes.io/unreachable Exists NoExecute 0xc000ccd7a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:05:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:05:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:05:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:05:17 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-20 11:05:18 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:05:18.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-q2jqq" for this suite. 
Jan 20 11:05:28.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:05:28.323: INFO: namespace: e2e-tests-deployment-q2jqq, resource: bindings, ignored listing per whitelist Jan 20 11:05:28.378: INFO: namespace e2e-tests-deployment-q2jqq deletion completed in 10.25929323s • [SLOW TEST:19.296 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:05:28.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward api env vars Jan 20 11:05:28.764: INFO: Waiting up to 5m0s for pod "downward-api-c4803c3b-3b74-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-jhdhw" to be "success or failure" Jan 20 11:05:28.778: INFO: Pod "downward-api-c4803c3b-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.487886ms Jan 20 11:05:30.792: INFO: Pod "downward-api-c4803c3b-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028425504s Jan 20 11:05:33.327: INFO: Pod "downward-api-c4803c3b-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.562517383s Jan 20 11:05:35.352: INFO: Pod "downward-api-c4803c3b-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.588122524s Jan 20 11:05:37.385: INFO: Pod "downward-api-c4803c3b-3b74-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.620475499s STEP: Saw pod success Jan 20 11:05:37.385: INFO: Pod "downward-api-c4803c3b-3b74-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:05:37.391: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-c4803c3b-3b74-11ea-8bde-0242ac110005 container dapi-container: STEP: delete the pod Jan 20 11:05:37.506: INFO: Waiting for pod downward-api-c4803c3b-3b74-11ea-8bde-0242ac110005 to disappear Jan 20 11:05:37.712: INFO: Pod downward-api-c4803c3b-3b74-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:05:37.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-jhdhw" for this suite. 
Jan 20 11:05:43.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:05:43.887: INFO: namespace: e2e-tests-downward-api-jhdhw, resource: bindings, ignored listing per whitelist Jan 20 11:05:44.176: INFO: namespace e2e-tests-downward-api-jhdhw deletion completed in 6.445980222s • [SLOW TEST:15.798 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:05:44.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jan 20 11:05:44.614: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:06:07.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-942cc" for this 
suite. Jan 20 11:06:31.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:06:31.759: INFO: namespace: e2e-tests-init-container-942cc, resource: bindings, ignored listing per whitelist Jan 20 11:06:31.913: INFO: namespace e2e-tests-init-container-942cc deletion completed in 24.364296381s • [SLOW TEST:47.737 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:06:31.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-ea47868c-3b74-11ea-8bde-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 20 11:06:32.136: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea485db5-3b74-11ea-8bde-0242ac110005" in namespace "e2e-tests-configmap-8dgt7" to be "success or failure" Jan 20 11:06:32.144: INFO: Pod "pod-configmaps-ea485db5-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.270937ms Jan 20 11:06:34.157: INFO: Pod "pod-configmaps-ea485db5-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021294758s Jan 20 11:06:36.167: INFO: Pod "pod-configmaps-ea485db5-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031448451s Jan 20 11:06:38.214: INFO: Pod "pod-configmaps-ea485db5-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078467335s Jan 20 11:06:40.232: INFO: Pod "pod-configmaps-ea485db5-3b74-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096006529s Jan 20 11:06:42.273: INFO: Pod "pod-configmaps-ea485db5-3b74-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.137349717s STEP: Saw pod success Jan 20 11:06:42.273: INFO: Pod "pod-configmaps-ea485db5-3b74-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:06:42.282: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ea485db5-3b74-11ea-8bde-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 20 11:06:42.533: INFO: Waiting for pod pod-configmaps-ea485db5-3b74-11ea-8bde-0242ac110005 to disappear Jan 20 11:06:42.549: INFO: Pod pod-configmaps-ea485db5-3b74-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:06:42.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-8dgt7" for this suite. 
Jan 20 11:06:48.613: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:06:48.753: INFO: namespace: e2e-tests-configmap-8dgt7, resource: bindings, ignored listing per whitelist Jan 20 11:06:48.761: INFO: namespace e2e-tests-configmap-8dgt7 deletion completed in 6.202136968s • [SLOW TEST:16.847 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:06:48.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating the pod Jan 20 11:06:57.653: INFO: Successfully updated pod "annotationupdatef4619a6b-3b74-11ea-8bde-0242ac110005" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:07:01.772: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-qz5g2" for this suite. Jan 20 11:07:25.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:07:26.064: INFO: namespace: e2e-tests-downward-api-qz5g2, resource: bindings, ignored listing per whitelist Jan 20 11:07:26.620: INFO: namespace e2e-tests-downward-api-qz5g2 deletion completed in 24.834311604s • [SLOW TEST:37.859 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:07:26.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: validating cluster-info Jan 20 11:07:27.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jan 20 11:07:27.265: INFO: stderr: "" Jan 20 11:07:27.265: INFO: stdout: "\x1b[0;32mKubernetes 
master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.212:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:07:27.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gq9rq" for this suite. Jan 20 11:07:33.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:07:33.533: INFO: namespace: e2e-tests-kubectl-gq9rq, resource: bindings, ignored listing per whitelist Jan 20 11:07:33.608: INFO: namespace e2e-tests-kubectl-gq9rq deletion completed in 6.337543668s • [SLOW TEST:6.987 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:07:33.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be 
provisioned in namespace [It] should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0644 on node default medium Jan 20 11:07:33.896: INFO: Waiting up to 5m0s for pod "pod-0f0d5d5d-3b75-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-76hx2" to be "success or failure" Jan 20 11:07:33.930: INFO: Pod "pod-0f0d5d5d-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 34.464574ms Jan 20 11:07:35.958: INFO: Pod "pod-0f0d5d5d-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062070164s Jan 20 11:07:37.971: INFO: Pod "pod-0f0d5d5d-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074821821s Jan 20 11:07:39.994: INFO: Pod "pod-0f0d5d5d-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098221299s Jan 20 11:07:42.132: INFO: Pod "pod-0f0d5d5d-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.235700335s Jan 20 11:07:44.151: INFO: Pod "pod-0f0d5d5d-3b75-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.255659113s STEP: Saw pod success Jan 20 11:07:44.152: INFO: Pod "pod-0f0d5d5d-3b75-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:07:44.174: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-0f0d5d5d-3b75-11ea-8bde-0242ac110005 container test-container: STEP: delete the pod Jan 20 11:07:44.268: INFO: Waiting for pod pod-0f0d5d5d-3b75-11ea-8bde-0242ac110005 to disappear Jan 20 11:07:44.548: INFO: Pod pod-0f0d5d5d-3b75-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:07:44.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-76hx2" for this suite. Jan 20 11:07:50.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:07:50.793: INFO: namespace: e2e-tests-emptydir-76hx2, resource: bindings, ignored listing per whitelist Jan 20 11:07:50.808: INFO: namespace e2e-tests-emptydir-76hx2 deletion completed in 6.232987323s • [SLOW TEST:17.199 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:07:50.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting 
for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-map-1957acd9-3b75-11ea-8bde-0242ac110005 STEP: Creating a pod to test consume secrets Jan 20 11:07:51.149: INFO: Waiting up to 5m0s for pod "pod-secrets-195a071c-3b75-11ea-8bde-0242ac110005" in namespace "e2e-tests-secrets-lpsrk" to be "success or failure" Jan 20 11:07:51.194: INFO: Pod "pod-secrets-195a071c-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 44.642061ms Jan 20 11:07:53.208: INFO: Pod "pod-secrets-195a071c-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058496333s Jan 20 11:07:55.241: INFO: Pod "pod-secrets-195a071c-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09135165s Jan 20 11:07:57.809: INFO: Pod "pod-secrets-195a071c-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659403096s Jan 20 11:07:59.829: INFO: Pod "pod-secrets-195a071c-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.679480446s Jan 20 11:08:01.949: INFO: Pod "pod-secrets-195a071c-3b75-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.799515964s STEP: Saw pod success Jan 20 11:08:01.949: INFO: Pod "pod-secrets-195a071c-3b75-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:08:01.958: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-195a071c-3b75-11ea-8bde-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 20 11:08:02.087: INFO: Waiting for pod pod-secrets-195a071c-3b75-11ea-8bde-0242ac110005 to disappear Jan 20 11:08:02.098: INFO: Pod pod-secrets-195a071c-3b75-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:08:02.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-lpsrk" for this suite. Jan 20 11:08:08.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:08:08.298: INFO: namespace: e2e-tests-secrets-lpsrk, resource: bindings, ignored listing per whitelist Jan 20 11:08:08.348: INFO: namespace e2e-tests-secrets-lpsrk deletion completed in 6.241398542s • [SLOW TEST:17.540 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:08:08.348: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 20 11:08:08.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-h7jrp' Jan 20 11:08:08.896: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 20 11:08:08.897: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Jan 20 11:08:08.905: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jan 20 11:08:08.937: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 20 11:08:09.085: INFO: scanned /root for discovery docs: Jan 20 11:08:09.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-h7jrp' Jan 20 11:08:32.938: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 20 11:08:32.938: INFO: stdout: "Created 
e2e-test-nginx-rc-fec9012a62e9a85f0ed51d1f0320a86d\nScaling up e2e-test-nginx-rc-fec9012a62e9a85f0ed51d1f0320a86d from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-fec9012a62e9a85f0ed51d1f0320a86d up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-fec9012a62e9a85f0ed51d1f0320a86d to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Jan 20 11:08:32.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-h7jrp' Jan 20 11:08:33.078: INFO: stderr: "" Jan 20 11:08:33.078: INFO: stdout: "e2e-test-nginx-rc-fec9012a62e9a85f0ed51d1f0320a86d-hzrnd " Jan 20 11:08:33.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-fec9012a62e9a85f0ed51d1f0320a86d-hzrnd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h7jrp' Jan 20 11:08:33.223: INFO: stderr: "" Jan 20 11:08:33.223: INFO: stdout: "true" Jan 20 11:08:33.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-fec9012a62e9a85f0ed51d1f0320a86d-hzrnd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h7jrp' Jan 20 11:08:33.371: INFO: stderr: "" Jan 20 11:08:33.371: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Jan 20 11:08:33.371: INFO: e2e-test-nginx-rc-fec9012a62e9a85f0ed51d1f0320a86d-hzrnd is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Jan 20 11:08:33.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-h7jrp' Jan 20 11:08:33.545: INFO: stderr: "" Jan 20 11:08:33.545: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:08:33.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-h7jrp" for this suite. 
Jan 20 11:08:57.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:08:57.746: INFO: namespace: e2e-tests-kubectl-h7jrp, resource: bindings, ignored listing per whitelist Jan 20 11:08:57.879: INFO: namespace e2e-tests-kubectl-h7jrp deletion completed in 24.304823164s • [SLOW TEST:49.531 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:08:57.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating projection with secret that has name projected-secret-test-map-41508422-3b75-11ea-8bde-0242ac110005 STEP: Creating a pod to test consume secrets Jan 20 11:08:58.139: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-415131f3-3b75-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-7lpgk" to be "success or failure" Jan 20 
11:08:58.153: INFO: Pod "pod-projected-secrets-415131f3-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.983709ms Jan 20 11:09:00.498: INFO: Pod "pod-projected-secrets-415131f3-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.359016879s Jan 20 11:09:02.517: INFO: Pod "pod-projected-secrets-415131f3-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377967934s Jan 20 11:09:04.535: INFO: Pod "pod-projected-secrets-415131f3-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396524274s Jan 20 11:09:06.636: INFO: Pod "pod-projected-secrets-415131f3-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.497627051s Jan 20 11:09:08.653: INFO: Pod "pod-projected-secrets-415131f3-3b75-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.514136433s STEP: Saw pod success Jan 20 11:09:08.653: INFO: Pod "pod-projected-secrets-415131f3-3b75-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:09:08.657: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-415131f3-3b75-11ea-8bde-0242ac110005 container projected-secret-volume-test: STEP: delete the pod Jan 20 11:09:08.729: INFO: Waiting for pod pod-projected-secrets-415131f3-3b75-11ea-8bde-0242ac110005 to disappear Jan 20 11:09:08.734: INFO: Pod pod-projected-secrets-415131f3-3b75-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:09:08.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-7lpgk" for this suite. 
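Editor's note: a hedged sketch of the pod the projected-secret test above creates. The log shows only the container name (`projected-secret-volume-test`) and the timestamped secret name; the secret key, mapped path, item mode, image, and mount path below are illustrative assumptions.

```yaml
# Reconstruction of a projected secret volume with a key-to-path mapping
# and an explicit item mode; names marked hypothetical are not from the log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example      # hypothetical; real name is timestamped
spec:
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29  # assumed test image
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-example  # hypothetical
          items:
          - key: data-1                            # assumed key
            path: new-path-data-1                  # mapped path
            mode: 0400                             # "Item Mode set"
  restartPolicy: Never
```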
Jan 20 11:09:14.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:09:14.932: INFO: namespace: e2e-tests-projected-7lpgk, resource: bindings, ignored listing per whitelist Jan 20 11:09:14.936: INFO: namespace e2e-tests-projected-7lpgk deletion completed in 6.194571494s • [SLOW TEST:17.056 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:09:14.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:09:25.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-wrapper-qksqz" for this suite. 
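Editor's note: the EmptyDir-wrapper "should not conflict" test above mounts a secret volume and a configMap volume in one pod and verifies their atomic-writer wrapper volumes do not collide. A hedged sketch, with all names illustrative:

```yaml
# Reconstruction: two API-backed volumes side by side in one pod; the test
# checks their internal emptyDir wrappers do not conflict. Names assumed.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-wrapper-example           # hypothetical
spec:
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29  # assumed image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-test-secret      # hypothetical
  - name: configmap-volume
    configMap:
      name: wrapper-test-configmap         # hypothetical
```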
Jan 20 11:09:31.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:09:31.834: INFO: namespace: e2e-tests-emptydir-wrapper-qksqz, resource: bindings, ignored listing per whitelist Jan 20 11:09:31.861: INFO: namespace e2e-tests-emptydir-wrapper-qksqz deletion completed in 6.342673454s • [SLOW TEST:16.925 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:09:31.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 20 11:09:32.064: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5589d3ae-3b75-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-m8wzm" to be "success or failure" Jan 20 11:09:32.091: INFO: Pod 
"downwardapi-volume-5589d3ae-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 27.080149ms Jan 20 11:09:34.152: INFO: Pod "downwardapi-volume-5589d3ae-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088349705s Jan 20 11:09:36.162: INFO: Pod "downwardapi-volume-5589d3ae-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098299334s Jan 20 11:09:38.181: INFO: Pod "downwardapi-volume-5589d3ae-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117521558s Jan 20 11:09:40.195: INFO: Pod "downwardapi-volume-5589d3ae-3b75-11ea-8bde-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.131804955s Jan 20 11:09:42.210: INFO: Pod "downwardapi-volume-5589d3ae-3b75-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.146852471s STEP: Saw pod success Jan 20 11:09:42.211: INFO: Pod "downwardapi-volume-5589d3ae-3b75-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:09:42.214: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5589d3ae-3b75-11ea-8bde-0242ac110005 container client-container: STEP: delete the pod Jan 20 11:09:42.282: INFO: Waiting for pod downwardapi-volume-5589d3ae-3b75-11ea-8bde-0242ac110005 to disappear Jan 20 11:09:42.324: INFO: Pod downwardapi-volume-5589d3ae-3b75-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:09:42.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-m8wzm" for this suite. 
Jan 20 11:09:48.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:09:48.737: INFO: namespace: e2e-tests-projected-m8wzm, resource: bindings, ignored listing per whitelist Jan 20 11:09:48.815: INFO: namespace e2e-tests-projected-m8wzm deletion completed in 6.399471312s • [SLOW TEST:16.953 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:09:48.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 20 11:10:09.457: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 20 11:10:09.499: INFO: Pod pod-with-prestop-http-hook still exists Jan 20 11:10:11.499: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 20 11:10:11.525: INFO: Pod pod-with-prestop-http-hook still exists Jan 20 11:10:13.499: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 20 11:10:13.516: INFO: Pod pod-with-prestop-http-hook still exists Jan 20 11:10:15.500: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 20 11:10:15.516: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:10:15.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-xvd4f" for this suite. 
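Editor's note: the preStop test above deletes `pod-with-prestop-http-hook` and then confirms the handler pod received an HTTP GET before the container stopped. A hedged sketch of the hooked pod; the handler host, port, and path are illustrative assumptions (the real test targets its helper pod's IP):

```yaml
# Reconstruction: on pod deletion the kubelet runs the preStop httpGet
# before sending the container its stop signal.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: docker.io/library/nginx:1.14-alpine  # assumed image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # hypothetical handler endpoint
          port: 8080                # hypothetical handler port
          host: 10.32.0.4           # hypothetical handler-pod IP
```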
Jan 20 11:10:39.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:10:39.736: INFO: namespace: e2e-tests-container-lifecycle-hook-xvd4f, resource: bindings, ignored listing per whitelist Jan 20 11:10:39.785: INFO: namespace e2e-tests-container-lifecycle-hook-xvd4f deletion completed in 24.224859776s • [SLOW TEST:50.970 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:10:39.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image 
docker.io/library/nginx:1.14-alpine Jan 20 11:10:40.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-hhqw4' Jan 20 11:10:40.183: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 20 11:10:40.183: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Jan 20 11:10:40.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-hhqw4' Jan 20 11:10:40.388: INFO: stderr: "" Jan 20 11:10:40.388: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:10:40.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-hhqw4" for this suite. 
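Editor's note: a hedged manifest equivalent of the deprecated generator command shown above (`kubectl run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine`); completions/backoff fields are left at their defaults since the log does not show them.

```yaml
# Sketch of the Job the job/v1 generator produces from the logged command.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
      restartPolicy: OnFailure     # from --restart=OnFailure
```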
Jan 20 11:10:48.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:10:48.697: INFO: namespace: e2e-tests-kubectl-hhqw4, resource: bindings, ignored listing per whitelist Jan 20 11:10:48.725: INFO: namespace e2e-tests-kubectl-hhqw4 deletion completed in 8.32311391s • [SLOW TEST:8.939 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:10:48.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-83555872-3b75-11ea-8bde-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 20 11:10:48.995: INFO: Waiting up to 5m0s for pod "pod-configmaps-8357c142-3b75-11ea-8bde-0242ac110005" in namespace "e2e-tests-configmap-nf75h" to be "success or failure" Jan 20 11:10:49.005: INFO: Pod "pod-configmaps-8357c142-3b75-11ea-8bde-0242ac110005": 
Phase="Pending", Reason="", readiness=false. Elapsed: 9.819739ms Jan 20 11:10:51.112: INFO: Pod "pod-configmaps-8357c142-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117412232s Jan 20 11:10:53.135: INFO: Pod "pod-configmaps-8357c142-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139948647s Jan 20 11:10:55.156: INFO: Pod "pod-configmaps-8357c142-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160948507s Jan 20 11:10:57.167: INFO: Pod "pod-configmaps-8357c142-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172549319s Jan 20 11:10:59.253: INFO: Pod "pod-configmaps-8357c142-3b75-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.258087714s STEP: Saw pod success Jan 20 11:10:59.253: INFO: Pod "pod-configmaps-8357c142-3b75-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:10:59.260: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8357c142-3b75-11ea-8bde-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 20 11:10:59.723: INFO: Waiting for pod pod-configmaps-8357c142-3b75-11ea-8bde-0242ac110005 to disappear Jan 20 11:10:59.747: INFO: Pod pod-configmaps-8357c142-3b75-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:10:59.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-nf75h" for this suite. 
Jan 20 11:11:05.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:11:05.888: INFO: namespace: e2e-tests-configmap-nf75h, resource: bindings, ignored listing per whitelist Jan 20 11:11:05.921: INFO: namespace e2e-tests-configmap-nf75h deletion completed in 6.166990905s • [SLOW TEST:17.196 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:11:05.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 20 11:11:06.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine 
--namespace=e2e-tests-kubectl-4bckx' Jan 20 11:11:06.273: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 20 11:11:06.273: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268 Jan 20 11:11:08.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-4bckx' Jan 20 11:11:08.565: INFO: stderr: "" Jan 20 11:11:08.565: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:11:08.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-4bckx" for this suite. 
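Editor's note: a hedged manifest equivalent of the default-generator run above (`kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine`, which the log shows falling back to `--generator=deployment/apps.v1`); the selector and labels follow the generator's `run=` convention and are assumptions.

```yaml
# Sketch of the Deployment the default generator creates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment       # assumed generator label
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```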
Jan 20 11:11:15.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:11:15.225: INFO: namespace: e2e-tests-kubectl-4bckx, resource: bindings, ignored listing per whitelist Jan 20 11:11:15.235: INFO: namespace e2e-tests-kubectl-4bckx deletion completed in 6.508782313s • [SLOW TEST:9.314 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:11:15.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test substitution in container's command Jan 20 11:11:15.451: INFO: Waiting up to 5m0s for pod "var-expansion-93272674-3b75-11ea-8bde-0242ac110005" in namespace "e2e-tests-var-expansion-qnvxl" to be "success or failure" Jan 20 11:11:15.460: INFO: Pod "var-expansion-93272674-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.4994ms Jan 20 11:11:17.483: INFO: Pod "var-expansion-93272674-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031724759s Jan 20 11:11:19.501: INFO: Pod "var-expansion-93272674-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049287989s Jan 20 11:11:21.676: INFO: Pod "var-expansion-93272674-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.224939101s Jan 20 11:11:23.715: INFO: Pod "var-expansion-93272674-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263640388s Jan 20 11:11:25.727: INFO: Pod "var-expansion-93272674-3b75-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.275611236s STEP: Saw pod success Jan 20 11:11:25.727: INFO: Pod "var-expansion-93272674-3b75-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:11:25.732: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-93272674-3b75-11ea-8bde-0242ac110005 container dapi-container: STEP: delete the pod Jan 20 11:11:26.091: INFO: Waiting for pod var-expansion-93272674-3b75-11ea-8bde-0242ac110005 to disappear Jan 20 11:11:26.109: INFO: Pod var-expansion-93272674-3b75-11ea-8bde-0242ac110005 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:11:26.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-var-expansion-qnvxl" for this suite. 
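The variable-expansion test above exercises `$(VAR)` substitution in a container's command. A sketch of the kind of pod it creates, assuming a busybox image and an illustrative TEST_VAR (the container name dapi-container comes from the log; the pod name is generated):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo   # illustrative; the test uses a UID-based name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox           # image is an assumption
    env:
    - name: TEST_VAR         # hypothetical variable name
      value: "test-value"
    # $(TEST_VAR) is expanded by the kubelet before the command runs
    command: ["/bin/sh", "-c", "echo $(TEST_VAR)"]
```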
Jan 20 11:11:32.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:11:32.429: INFO: namespace: e2e-tests-var-expansion-qnvxl, resource: bindings, ignored listing per whitelist Jan 20 11:11:32.676: INFO: namespace e2e-tests-var-expansion-qnvxl deletion completed in 6.55056031s • [SLOW TEST:17.441 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:11:32.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jan 20 11:11:40.790: INFO: 10 pods remaining Jan 20 11:11:40.791: INFO: 8 pods has nil DeletionTimestamp Jan 20 11:11:40.791: INFO: Jan 20 11:11:41.360: INFO: 3 pods remaining Jan 20 11:11:41.361: INFO: 1 pods has nil DeletionTimestamp Jan 20 11:11:41.361: INFO: STEP: Gathering metrics W0120 11:11:42.322980 8 metrics_grabber.go:81] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 20 11:11:42.323: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:11:42.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-ljf2l" for this suite. 
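The garbage-collector test verifies foreground cascading deletion: the RC is retained (counting down "pods remaining" as above) until all of its pods are deleted. A sketch of the deleteOptions the test's behavior implies, sent as the body of the DELETE request:

```yaml
# DeleteOptions body for a foreground cascading delete: the RC
# gets a foregroundDeletion finalizer and is removed only after
# its dependent pods are gone.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Foreground
```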
Jan 20 11:11:56.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:11:56.413: INFO: namespace: e2e-tests-gc-ljf2l, resource: bindings, ignored listing per whitelist Jan 20 11:11:56.630: INFO: namespace e2e-tests-gc-ljf2l deletion completed in 14.30300258s • [SLOW TEST:23.953 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:11:56.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-projected-all-test-volume-abcd39ec-3b75-11ea-8bde-0242ac110005 STEP: Creating secret with name secret-projected-all-test-volume-abcd39d2-3b75-11ea-8bde-0242ac110005 STEP: Creating a pod to test Check all projections for projected volume plugin Jan 20 11:11:56.915: INFO: Waiting up to 5m0s for pod "projected-volume-abcd3880-3b75-11ea-8bde-0242ac110005" in namespace 
"e2e-tests-projected-t7kf6" to be "success or failure" Jan 20 11:11:57.183: INFO: Pod "projected-volume-abcd3880-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 267.401284ms Jan 20 11:11:59.196: INFO: Pod "projected-volume-abcd3880-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280597844s Jan 20 11:12:01.218: INFO: Pod "projected-volume-abcd3880-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.303154069s Jan 20 11:12:04.076: INFO: Pod "projected-volume-abcd3880-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.161201538s Jan 20 11:12:06.086: INFO: Pod "projected-volume-abcd3880-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.170961501s Jan 20 11:12:08.102: INFO: Pod "projected-volume-abcd3880-3b75-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.18712834s STEP: Saw pod success Jan 20 11:12:08.103: INFO: Pod "projected-volume-abcd3880-3b75-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:12:08.107: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod projected-volume-abcd3880-3b75-11ea-8bde-0242ac110005 container projected-all-volume-test: STEP: delete the pod Jan 20 11:12:08.194: INFO: Waiting for pod projected-volume-abcd3880-3b75-11ea-8bde-0242ac110005 to disappear Jan 20 11:12:08.210: INFO: Pod projected-volume-abcd3880-3b75-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:12:08.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-t7kf6" for this suite. 
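The projected-volume test combines configMap, secret, and downwardAPI sources in a single volume. A sketch of such a pod, with hypothetical source names (the actual test uses the generated configmap-projected-all-test-volume-* and secret-projected-all-test-volume-* names seen in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox                # image is an assumption
    command: ["/bin/sh", "-c", "ls /projected-volume"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: my-configmap      # hypothetical name
      - secret:
          name: my-secret         # hypothetical name
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```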
Jan 20 11:12:16.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:12:16.432: INFO: namespace: e2e-tests-projected-t7kf6, resource: bindings, ignored listing per whitelist Jan 20 11:12:16.540: INFO: namespace e2e-tests-projected-t7kf6 deletion completed in 8.268857607s • [SLOW TEST:19.910 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:12:16.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 STEP: Creating service test in namespace e2e-tests-statefulset-w44tl [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace 
e2e-tests-statefulset-w44tl STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-w44tl STEP: Waiting until pod test-pod will start running in namespace e2e-tests-statefulset-w44tl STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace e2e-tests-statefulset-w44tl Jan 20 11:12:26.889: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-w44tl, name: ss-0, uid: bb96e402-3b75-11ea-a994-fa163e34d433, status phase: Pending. Waiting for statefulset controller to delete. Jan 20 11:12:32.492: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-w44tl, name: ss-0, uid: bb96e402-3b75-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Jan 20 11:12:32.628: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-w44tl, name: ss-0, uid: bb96e402-3b75-11ea-a994-fa163e34d433, status phase: Failed. Waiting for statefulset controller to delete. Jan 20 11:12:32.665: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-w44tl STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-w44tl STEP: Waiting when stateful pod ss-0 will be recreated in namespace e2e-tests-statefulset-w44tl and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 Jan 20 11:12:45.081: INFO: Deleting all statefulset in ns e2e-tests-statefulset-w44tl Jan 20 11:12:45.086: INFO: Scaling statefulset ss to 0 Jan 20 11:12:55.127: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 11:12:55.151: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:12:55.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"e2e-tests-statefulset-w44tl" for this suite. Jan 20 11:13:03.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:13:03.449: INFO: namespace: e2e-tests-statefulset-w44tl, resource: bindings, ignored listing per whitelist Jan 20 11:13:03.513: INFO: namespace e2e-tests-statefulset-w44tl deletion completed in 8.236259458s • [SLOW TEST:46.973 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:13:03.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating configMap with name configmap-test-volume-map-d3c0c583-3b75-11ea-8bde-0242ac110005 STEP: Creating a pod to test consume configMaps Jan 20 11:13:03.826: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3c172d3-3b75-11ea-8bde-0242ac110005" in namespace "e2e-tests-configmap-svppk" to be 
"success or failure" Jan 20 11:13:03.835: INFO: Pod "pod-configmaps-d3c172d3-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.860854ms Jan 20 11:13:05.844: INFO: Pod "pod-configmaps-d3c172d3-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017490308s Jan 20 11:13:07.886: INFO: Pod "pod-configmaps-d3c172d3-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059385536s Jan 20 11:13:09.907: INFO: Pod "pod-configmaps-d3c172d3-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080915848s Jan 20 11:13:11.942: INFO: Pod "pod-configmaps-d3c172d3-3b75-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116013011s Jan 20 11:13:14.074: INFO: Pod "pod-configmaps-d3c172d3-3b75-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.248203709s STEP: Saw pod success Jan 20 11:13:14.075: INFO: Pod "pod-configmaps-d3c172d3-3b75-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:13:14.084: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-d3c172d3-3b75-11ea-8bde-0242ac110005 container configmap-volume-test: STEP: delete the pod Jan 20 11:13:14.240: INFO: Waiting for pod pod-configmaps-d3c172d3-3b75-11ea-8bde-0242ac110005 to disappear Jan 20 11:13:14.247: INFO: Pod pod-configmaps-d3c172d3-3b75-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:13:14.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-configmap-svppk" for this suite. 
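"With mappings" in this test means the configMap volume uses `items` to remap keys onto chosen file paths, rather than mounting every key under its own name. A sketch, with hypothetical key and path names:

```yaml
# Volume section only; key and path names are illustrative.
volumes:
- name: configmap-volume
  configMap:
    name: configmap-test-volume-map   # the log shows a generated variant of this
    items:
    - key: data-1                     # hypothetical key
      path: path/to/data-1            # file appears at <mountPath>/path/to/data-1
```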
Jan 20 11:13:20.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:13:20.759: INFO: namespace: e2e-tests-configmap-svppk, resource: bindings, ignored listing per whitelist Jan 20 11:13:20.759: INFO: namespace e2e-tests-configmap-svppk deletion completed in 6.50552925s • [SLOW TEST:17.245 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:13:20.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 20 11:13:20.993: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-92jph,SelfLink:/api/v1/namespaces/e2e-tests-watch-92jph/configmaps/e2e-watch-test-configmap-a,UID:ddfabc1b-3b75-11ea-a994-fa163e34d433,ResourceVersion:18841666,Generation:0,CreationTimestamp:2020-01-20 11:13:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 20 11:13:20.993: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-92jph,SelfLink:/api/v1/namespaces/e2e-tests-watch-92jph/configmaps/e2e-watch-test-configmap-a,UID:ddfabc1b-3b75-11ea-a994-fa163e34d433,ResourceVersion:18841666,Generation:0,CreationTimestamp:2020-01-20 11:13:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 20 11:13:31.020: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-92jph,SelfLink:/api/v1/namespaces/e2e-tests-watch-92jph/configmaps/e2e-watch-test-configmap-a,UID:ddfabc1b-3b75-11ea-a994-fa163e34d433,ResourceVersion:18841679,Generation:0,CreationTimestamp:2020-01-20 11:13:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 20 11:13:31.021: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-92jph,SelfLink:/api/v1/namespaces/e2e-tests-watch-92jph/configmaps/e2e-watch-test-configmap-a,UID:ddfabc1b-3b75-11ea-a994-fa163e34d433,ResourceVersion:18841679,Generation:0,CreationTimestamp:2020-01-20 11:13:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 20 11:13:41.043: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-92jph,SelfLink:/api/v1/namespaces/e2e-tests-watch-92jph/configmaps/e2e-watch-test-configmap-a,UID:ddfabc1b-3b75-11ea-a994-fa163e34d433,ResourceVersion:18841692,Generation:0,CreationTimestamp:2020-01-20 11:13:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 20 11:13:41.043: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-92jph,SelfLink:/api/v1/namespaces/e2e-tests-watch-92jph/configmaps/e2e-watch-test-configmap-a,UID:ddfabc1b-3b75-11ea-a994-fa163e34d433,ResourceVersion:18841692,Generation:0,CreationTimestamp:2020-01-20 11:13:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 20 11:13:51.069: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-92jph,SelfLink:/api/v1/namespaces/e2e-tests-watch-92jph/configmaps/e2e-watch-test-configmap-a,UID:ddfabc1b-3b75-11ea-a994-fa163e34d433,ResourceVersion:18841705,Generation:0,CreationTimestamp:2020-01-20 11:13:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 20 11:13:51.069: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-92jph,SelfLink:/api/v1/namespaces/e2e-tests-watch-92jph/configmaps/e2e-watch-test-configmap-a,UID:ddfabc1b-3b75-11ea-a994-fa163e34d433,ResourceVersion:18841705,Generation:0,CreationTimestamp:2020-01-20 11:13:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 20 11:14:01.096: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-92jph,SelfLink:/api/v1/namespaces/e2e-tests-watch-92jph/configmaps/e2e-watch-test-configmap-b,UID:f5e42a01-3b75-11ea-a994-fa163e34d433,ResourceVersion:18841717,Generation:0,CreationTimestamp:2020-01-20 11:14:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 20 11:14:01.096: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-92jph,SelfLink:/api/v1/namespaces/e2e-tests-watch-92jph/configmaps/e2e-watch-test-configmap-b,UID:f5e42a01-3b75-11ea-a994-fa163e34d433,ResourceVersion:18841717,Generation:0,CreationTimestamp:2020-01-20 11:14:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 20 11:14:11.124: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-92jph,SelfLink:/api/v1/namespaces/e2e-tests-watch-92jph/configmaps/e2e-watch-test-configmap-b,UID:f5e42a01-3b75-11ea-a994-fa163e34d433,ResourceVersion:18841730,Generation:0,CreationTimestamp:2020-01-20 11:14:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 20 11:14:11.124: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-92jph,SelfLink:/api/v1/namespaces/e2e-tests-watch-92jph/configmaps/e2e-watch-test-configmap-b,UID:f5e42a01-3b75-11ea-a994-fa163e34d433,ResourceVersion:18841730,Generation:0,CreationTimestamp:2020-01-20 11:14:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:14:21.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-watch-92jph" for this suite. 
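The ConfigMap objects dumped inline above are hard to read in Go struct form; each watcher selects on the watch-this-configmap label, and the A/B configmaps differ only in name and label value. Reconstructed as a manifest from the logged fields, configmap A after its first mutation looks like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: e2e-tests-watch-92jph
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"
```

Each MODIFIED event above corresponds to the `mutation` value being incremented, which is why watchers A and A-or-B both observe it while watcher B stays silent until configmap B is created.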
Jan 20 11:14:27.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:14:27.305: INFO: namespace: e2e-tests-watch-92jph, resource: bindings, ignored listing per whitelist Jan 20 11:14:27.400: INFO: namespace e2e-tests-watch-92jph deletion completed in 6.250511883s • [SLOW TEST:66.641 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:14:27.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jan 20 11:14:38.295: INFO: Successfully updated pod "pod-update-05be7d6f-3b76-11ea-8bde-0242ac110005" STEP: verifying the updated pod is in kubernetes Jan 20 11:14:38.322: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:14:38.322: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-kfcbc" for this suite. Jan 20 11:15:02.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:15:02.728: INFO: namespace: e2e-tests-pods-kfcbc, resource: bindings, ignored listing per whitelist Jan 20 11:15:02.786: INFO: namespace e2e-tests-pods-kfcbc deletion completed in 24.451734703s • [SLOW TEST:35.386 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:15:02.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 20 11:15:03.041: INFO: Waiting up to 5m0s for pod "pod-1ad03a8d-3b76-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-lh68c" to be "success or failure" Jan 20 11:15:03.050: INFO: Pod "pod-1ad03a8d-3b76-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.319881ms Jan 20 11:15:05.114: INFO: Pod "pod-1ad03a8d-3b76-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07218167s Jan 20 11:15:07.152: INFO: Pod "pod-1ad03a8d-3b76-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110063561s Jan 20 11:15:09.168: INFO: Pod "pod-1ad03a8d-3b76-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126256861s Jan 20 11:15:11.184: INFO: Pod "pod-1ad03a8d-3b76-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.142013164s STEP: Saw pod success Jan 20 11:15:11.184: INFO: Pod "pod-1ad03a8d-3b76-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:15:11.195: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-1ad03a8d-3b76-11ea-8bde-0242ac110005 container test-container: STEP: delete the pod Jan 20 11:15:11.880: INFO: Waiting for pod pod-1ad03a8d-3b76-11ea-8bde-0242ac110005 to disappear Jan 20 11:15:12.110: INFO: Pod pod-1ad03a8d-3b76-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:15:12.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-lh68c" for this suite. 
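The (root,0777,default) case mounts an emptyDir with the default (node-disk) medium and verifies 0777 permissions as root. A sketch of the pod shape, assuming a busybox image (the container name test-container comes from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo        # illustrative; the test uses a UID-based name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox           # image is an assumption
    command: ["/bin/sh", "-c", "ls -la /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}             # default medium, i.e. backed by node disk
```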
Jan 20 11:15:18.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:15:18.213: INFO: namespace: e2e-tests-emptydir-lh68c, resource: bindings, ignored listing per whitelist
Jan 20 11:15:18.372: INFO: namespace e2e-tests-emptydir-lh68c deletion completed in 6.252176485s
• [SLOW TEST:15.585 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:15:18.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 20 11:15:18.843: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 20 11:15:18.862: INFO: Number of nodes with available pods: 0
Jan 20 11:15:18.862: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 20 11:15:18.976: INFO: Number of nodes with available pods: 0
Jan 20 11:15:18.976: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:19.994: INFO: Number of nodes with available pods: 0
Jan 20 11:15:19.994: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:21.056: INFO: Number of nodes with available pods: 0
Jan 20 11:15:21.056: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:21.989: INFO: Number of nodes with available pods: 0
Jan 20 11:15:21.989: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:23.006: INFO: Number of nodes with available pods: 0
Jan 20 11:15:23.006: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:24.162: INFO: Number of nodes with available pods: 0
Jan 20 11:15:24.162: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:25.139: INFO: Number of nodes with available pods: 0
Jan 20 11:15:25.139: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:26.008: INFO: Number of nodes with available pods: 0
Jan 20 11:15:26.008: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:26.990: INFO: Number of nodes with available pods: 1
Jan 20 11:15:26.990: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 20 11:15:27.155: INFO: Number of nodes with available pods: 1
Jan 20 11:15:27.155: INFO: Number of running nodes: 0, number of available pods: 1
Jan 20 11:15:28.193: INFO: Number of nodes with available pods: 0
Jan 20 11:15:28.193: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 20 11:15:28.263: INFO: Number of nodes with available pods: 0
Jan 20 11:15:28.263: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:30.051: INFO: Number of nodes with available pods: 0
Jan 20 11:15:30.051: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:30.337: INFO: Number of nodes with available pods: 0
Jan 20 11:15:30.337: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:31.273: INFO: Number of nodes with available pods: 0
Jan 20 11:15:31.273: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:32.277: INFO: Number of nodes with available pods: 0
Jan 20 11:15:32.277: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:33.318: INFO: Number of nodes with available pods: 0
Jan 20 11:15:33.318: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:34.274: INFO: Number of nodes with available pods: 0
Jan 20 11:15:34.274: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:35.284: INFO: Number of nodes with available pods: 0
Jan 20 11:15:35.284: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:36.286: INFO: Number of nodes with available pods: 0
Jan 20 11:15:36.286: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:37.282: INFO: Number of nodes with available pods: 0
Jan 20 11:15:37.282: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:38.283: INFO: Number of nodes with available pods: 0
Jan 20 11:15:38.283: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:39.284: INFO: Number of nodes with available pods: 0
Jan 20 11:15:39.284: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:40.275: INFO: Number of nodes with available pods: 0
Jan 20 11:15:40.276: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:41.284: INFO: Number of nodes with available pods: 0
Jan 20 11:15:41.284: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:42.281: INFO: Number of nodes with available pods: 0
Jan 20 11:15:42.281: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:43.278: INFO: Number of nodes with available pods: 0
Jan 20 11:15:43.278: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:44.283: INFO: Number of nodes with available pods: 0
Jan 20 11:15:44.283: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:45.277: INFO: Number of nodes with available pods: 0
Jan 20 11:15:45.277: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:46.296: INFO: Number of nodes with available pods: 0
Jan 20 11:15:46.296: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:47.277: INFO: Number of nodes with available pods: 0
Jan 20 11:15:47.277: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:48.279: INFO: Number of nodes with available pods: 0
Jan 20 11:15:48.279: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:49.276: INFO: Number of nodes with available pods: 0
Jan 20 11:15:49.277: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:15:50.282: INFO: Number of nodes with available pods: 1
Jan 20 11:15:50.282: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4cgf9, will wait for the garbage collector to delete the pods
Jan 20 11:15:50.381: INFO: Deleting DaemonSet.extensions daemon-set took: 29.928501ms
Jan 20 11:15:50.583: INFO: Terminating DaemonSet.extensions daemon-set pods took: 201.219897ms
Jan 20 11:15:57.274: INFO: Number of nodes with available pods: 0
Jan 20 11:15:57.274: INFO: Number of running nodes: 0, number of available pods: 0
Jan 20 11:15:57.286: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4cgf9/daemonsets","resourceVersion":"18841964"},"items":null}
Jan 20 11:15:57.291: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4cgf9/pods","resourceVersion":"18841964"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:15:57.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-4cgf9" for this suite.
Jan 20 11:16:03.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:16:03.678: INFO: namespace: e2e-tests-daemonsets-4cgf9, resource: bindings, ignored listing per whitelist
Jan 20 11:16:03.745: INFO: namespace e2e-tests-daemonsets-4cgf9 deletion completed in 6.262129557s
• [SLOW TEST:45.373 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:16:03.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 20 11:16:14.836: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3f3e4b14-3b76-11ea-8bde-0242ac110005"
Jan 20 11:16:14.836: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3f3e4b14-3b76-11ea-8bde-0242ac110005" in namespace "e2e-tests-pods-46glr" to be "terminated due to deadline exceeded"
Jan 20 11:16:15.167: INFO: Pod "pod-update-activedeadlineseconds-3f3e4b14-3b76-11ea-8bde-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 331.350583ms
Jan 20 11:16:17.183: INFO: Pod "pod-update-activedeadlineseconds-3f3e4b14-3b76-11ea-8bde-0242ac110005": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.347204002s
Jan 20 11:16:17.183: INFO: Pod "pod-update-activedeadlineseconds-3f3e4b14-3b76-11ea-8bde-0242ac110005" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:16:17.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-46glr" for this suite.
Jan 20 11:16:25.212: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:16:25.354: INFO: namespace: e2e-tests-pods-46glr, resource: bindings, ignored listing per whitelist
Jan 20 11:16:25.386: INFO: namespace e2e-tests-pods-46glr deletion completed in 8.195971235s
• [SLOW TEST:21.641 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:16:25.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 11:16:25.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c04d913-3b76-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-f7z7f" to be "success or failure"
Jan 20 11:16:25.690: INFO: Pod "downwardapi-volume-4c04d913-3b76-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 30.597379ms
Jan 20 11:16:27.703: INFO: Pod "downwardapi-volume-4c04d913-3b76-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044040729s
Jan 20 11:16:29.717: INFO: Pod "downwardapi-volume-4c04d913-3b76-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057626369s
Jan 20 11:16:31.738: INFO: Pod "downwardapi-volume-4c04d913-3b76-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078532069s
Jan 20 11:16:33.760: INFO: Pod "downwardapi-volume-4c04d913-3b76-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101384675s
STEP: Saw pod success
Jan 20 11:16:33.761: INFO: Pod "downwardapi-volume-4c04d913-3b76-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:16:33.768: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-4c04d913-3b76-11ea-8bde-0242ac110005 container client-container:
STEP: delete the pod
Jan 20 11:16:33.893: INFO: Waiting for pod downwardapi-volume-4c04d913-3b76-11ea-8bde-0242ac110005 to disappear
Jan 20 11:16:34.029: INFO: Pod downwardapi-volume-4c04d913-3b76-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:16:34.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-f7z7f" for this suite.
Jan 20 11:16:40.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:16:40.201: INFO: namespace: e2e-tests-downward-api-f7z7f, resource: bindings, ignored listing per whitelist
Jan 20 11:16:40.232: INFO: namespace e2e-tests-downward-api-f7z7f deletion completed in 6.186373538s
• [SLOW TEST:14.846 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:16:40.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jan 20 11:16:40.942: INFO: created pod pod-service-account-defaultsa
Jan 20 11:16:40.943: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 20 11:16:40.957: INFO: created pod pod-service-account-mountsa
Jan 20 11:16:40.957: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 20 11:16:40.986: INFO: created pod pod-service-account-nomountsa
Jan 20 11:16:40.986: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 20 11:16:41.189: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 20 11:16:41.189: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 20 11:16:41.212: INFO: created pod pod-service-account-mountsa-mountspec
Jan 20 11:16:41.213: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 20 11:16:41.234: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 20 11:16:41.234: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 20 11:16:41.382: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 20 11:16:41.382: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 20 11:16:41.405: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 20 11:16:41.405: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 20 11:16:41.451: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 20 11:16:41.451: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:16:41.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-jsbsq" for this suite.
Jan 20 11:17:27.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:17:27.282: INFO: namespace: e2e-tests-svcaccounts-jsbsq, resource: bindings, ignored listing per whitelist
Jan 20 11:17:27.434: INFO: namespace e2e-tests-svcaccounts-jsbsq deletion completed in 45.771064465s
• [SLOW TEST:47.201 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:17:27.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 20 11:17:27.686: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 20 11:17:32.704: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 20 11:17:38.733: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 20 11:17:40.739: INFO: Creating deployment "test-rollover-deployment"
Jan 20 11:17:40.812: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 20
11:17:42.829: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 20 11:17:42.844: INFO: Ensure that both replica sets have 1 created replica Jan 20 11:17:42.859: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 20 11:17:42.885: INFO: Updating deployment test-rollover-deployment Jan 20 11:17:42.885: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 20 11:17:44.946: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 20 11:17:44.957: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 20 11:17:44.963: INFO: all replica sets need to contain the pod-template-hash label Jan 20 11:17:44.963: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115863, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115860, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:17:47.005: INFO: all replica sets need to contain the pod-template-hash label Jan 20 11:17:47.005: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115863, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115860, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:17:49.486: INFO: all replica sets need to contain the pod-template-hash label Jan 20 11:17:49.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115863, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115860, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:17:51.015: INFO: all replica sets need to contain the pod-template-hash label Jan 20 11:17:51.016: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115863, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115860, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:17:52.995: INFO: all replica sets need to contain the pod-template-hash label Jan 20 11:17:52.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115872, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115860, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:17:54.987: INFO: all 
replica sets need to contain the pod-template-hash label Jan 20 11:17:54.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115872, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115860, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:17:57.021: INFO: all replica sets need to contain the pod-template-hash label Jan 20 11:17:57.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115872, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115860, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:17:59.000: INFO: all replica sets need to contain the pod-template-hash label Jan 20 11:17:59.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115872, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115860, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:18:01.004: INFO: all replica sets need to contain the pod-template-hash label Jan 20 11:18:01.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115861, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715115872, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63715115860, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 11:18:02.995: INFO: Jan 20 11:18:02.995: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 20 11:18:03.034: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-cgdsm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cgdsm/deployments/test-rollover-deployment,UID:78d20ec8-3b76-11ea-a994-fa163e34d433,ResourceVersion:18842356,Generation:2,CreationTimestamp:2020-01-20 11:17:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-20 11:17:41 +0000 UTC 2020-01-20 11:17:41 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-20 11:18:02 +0000 UTC 2020-01-20 11:17:40 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 20 11:18:03.039: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-cgdsm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cgdsm/replicasets/test-rollover-deployment-5b8479fdb6,UID:7a19b574-3b76-11ea-a994-fa163e34d433,ResourceVersion:18842347,Generation:2,CreationTimestamp:2020-01-20 11:17:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 78d20ec8-3b76-11ea-a994-fa163e34d433 0xc000ed36e7 0xc000ed36e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 20 11:18:03.039: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 20 11:18:03.040: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-cgdsm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cgdsm/replicasets/test-rollover-controller,UID:70fe2f56-3b76-11ea-a994-fa163e34d433,ResourceVersion:18842355,Generation:2,CreationTimestamp:2020-01-20 11:17:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 78d20ec8-3b76-11ea-a994-fa163e34d433 0xc000ed353f 0xc000ed3550}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 20 11:18:03.040: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-cgdsm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-cgdsm/replicasets/test-rollover-deployment-58494b7559,UID:78e00f8b-3b76-11ea-a994-fa163e34d433,ResourceVersion:18842315,Generation:2,CreationTimestamp:2020-01-20 11:17:40 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 78d20ec8-3b76-11ea-a994-fa163e34d433 0xc000ed3617 0xc000ed3618}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 20 11:18:03.045: INFO: Pod "test-rollover-deployment-5b8479fdb6-qm2np" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-qm2np,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-cgdsm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-cgdsm/pods/test-rollover-deployment-5b8479fdb6-qm2np,UID:7a5a967b-3b76-11ea-a994-fa163e34d433,ResourceVersion:18842332,Generation:0,CreationTimestamp:2020-01-20 11:17:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 7a19b574-3b76-11ea-a994-fa163e34d433 0xc001544957 0xc001544958}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-9ddxp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9ddxp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis 
gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-9ddxp true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0015449c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0015449e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:17:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:17:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:17:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:17:43 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-20 11:17:43 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-20 11:17:51 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://045c041870e33b8cea2e098e4828f49d65a0d02d36e9d1b6b470ebc114f9b4a6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:18:03.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-cgdsm" for this suite. Jan 20 11:18:11.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:18:11.359: INFO: namespace: e2e-tests-deployment-cgdsm, resource: bindings, ignored listing per whitelist Jan 20 11:18:11.483: INFO: namespace e2e-tests-deployment-cgdsm deletion completed in 8.413926603s • [SLOW TEST:44.050 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:18:11.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 [It] 
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-fzflh Jan 20 11:18:20.069: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-fzflh STEP: checking the pod's current state and verifying that restartCount is present Jan 20 11:18:20.083: INFO: Initial restart count of pod liveness-exec is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:22:21.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-container-probe-fzflh" for this suite. Jan 20 11:22:27.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:22:27.657: INFO: namespace: e2e-tests-container-probe-fzflh, resource: bindings, ignored listing per whitelist Jan 20 11:22:27.751: INFO: namespace e2e-tests-container-probe-fzflh deletion completed in 6.370868838s • [SLOW TEST:256.267 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:22:27.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 20 11:22:28.005: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2406a1f7-3b77-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-l8bns" to be "success or failure" Jan 20 11:22:28.021: INFO: Pod "downwardapi-volume-2406a1f7-3b77-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.484937ms Jan 20 11:22:30.042: INFO: Pod "downwardapi-volume-2406a1f7-3b77-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036514216s Jan 20 11:22:32.066: INFO: Pod "downwardapi-volume-2406a1f7-3b77-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060539599s Jan 20 11:22:34.077: INFO: Pod "downwardapi-volume-2406a1f7-3b77-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071971369s Jan 20 11:22:36.211: INFO: Pod "downwardapi-volume-2406a1f7-3b77-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.205994934s Jan 20 11:22:38.225: INFO: Pod "downwardapi-volume-2406a1f7-3b77-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.219811511s STEP: Saw pod success Jan 20 11:22:38.225: INFO: Pod "downwardapi-volume-2406a1f7-3b77-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:22:38.231: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-2406a1f7-3b77-11ea-8bde-0242ac110005 container client-container: STEP: delete the pod Jan 20 11:22:38.294: INFO: Waiting for pod downwardapi-volume-2406a1f7-3b77-11ea-8bde-0242ac110005 to disappear Jan 20 11:22:38.304: INFO: Pod downwardapi-volume-2406a1f7-3b77-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:22:38.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-l8bns" for this suite. Jan 20 11:22:44.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:22:44.534: INFO: namespace: e2e-tests-projected-l8bns, resource: bindings, ignored listing per whitelist Jan 20 11:22:44.609: INFO: namespace e2e-tests-projected-l8bns deletion completed in 6.292586108s • [SLOW TEST:16.858 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:22:44.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 20 11:22:44.977: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2e0def14-3b77-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00188bb42), BlockOwnerDeletion:(*bool)(0xc00188bb43)}} Jan 20 11:22:45.144: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2e07f13c-3b77-11ea-a994-fa163e34d433", Controller:(*bool)(0xc00188bd02), BlockOwnerDeletion:(*bool)(0xc00188bd03)}} Jan 20 11:22:45.168: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2e099e68-3b77-11ea-a994-fa163e34d433", Controller:(*bool)(0xc001eab97a), BlockOwnerDeletion:(*bool)(0xc001eab97b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:22:50.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-bbgpl" for this suite. 
Jan 20 11:22:56.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:22:56.454: INFO: namespace: e2e-tests-gc-bbgpl, resource: bindings, ignored listing per whitelist Jan 20 11:22:56.517: INFO: namespace e2e-tests-gc-bbgpl deletion completed in 6.297868137s • [SLOW TEST:11.908 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:22:56.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 20 11:22:56.740: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:23:06.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-pods-wllnm" for this suite. 
Jan 20 11:23:55.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:23:55.147: INFO: namespace: e2e-tests-pods-wllnm, resource: bindings, ignored listing per whitelist Jan 20 11:23:55.274: INFO: namespace e2e-tests-pods-wllnm deletion completed in 48.302889758s • [SLOW TEST:58.756 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:23:55.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating service multi-endpoint-test in namespace e2e-tests-services-zkhjp STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zkhjp to expose endpoints map[] Jan 20 11:23:55.633: INFO: Get endpoints failed (17.003921ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 20 11:23:56.652: INFO: successfully validated that service multi-endpoint-test in namespace 
e2e-tests-services-zkhjp exposes endpoints map[] (1.035899206s elapsed) STEP: Creating pod pod1 in namespace e2e-tests-services-zkhjp STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zkhjp to expose endpoints map[pod1:[100]] Jan 20 11:24:00.948: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.264316465s elapsed, will retry) Jan 20 11:24:05.066: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zkhjp exposes endpoints map[pod1:[100]] (8.382223218s elapsed) STEP: Creating pod pod2 in namespace e2e-tests-services-zkhjp STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zkhjp to expose endpoints map[pod1:[100] pod2:[101]] Jan 20 11:24:10.944: INFO: Unexpected endpoints: found map[58e2ecb9-3b77-11ea-a994-fa163e34d433:[100]], expected map[pod1:[100] pod2:[101]] (5.850435139s elapsed, will retry) Jan 20 11:24:15.048: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zkhjp exposes endpoints map[pod1:[100] pod2:[101]] (9.954164004s elapsed) STEP: Deleting pod pod1 in namespace e2e-tests-services-zkhjp STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zkhjp to expose endpoints map[pod2:[101]] Jan 20 11:24:16.236: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zkhjp exposes endpoints map[pod2:[101]] (1.14152129s elapsed) STEP: Deleting pod pod2 in namespace e2e-tests-services-zkhjp STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-zkhjp to expose endpoints map[] Jan 20 11:24:17.893: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-zkhjp exposes endpoints map[] (1.642654133s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 
11:24:18.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-services-zkhjp" for this suite. Jan 20 11:24:42.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:24:42.707: INFO: namespace: e2e-tests-services-zkhjp, resource: bindings, ignored listing per whitelist Jan 20 11:24:42.795: INFO: namespace e2e-tests-services-zkhjp deletion completed in 24.549202229s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90 • [SLOW TEST:47.521 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:24:42.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:24:51.101: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-8lrr6" for this suite. Jan 20 11:25:45.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:25:45.194: INFO: namespace: e2e-tests-kubelet-test-8lrr6, resource: bindings, ignored listing per whitelist Jan 20 11:25:45.405: INFO: namespace e2e-tests-kubelet-test-8lrr6 deletion completed in 54.293412956s • [SLOW TEST:62.610 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:25:45.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:25:45.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubelet-test-xbwz4" for this suite. Jan 20 11:26:08.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:26:08.202: INFO: namespace: e2e-tests-kubelet-test-xbwz4, resource: bindings, ignored listing per whitelist Jan 20 11:26:08.220: INFO: namespace e2e-tests-kubelet-test-xbwz4 deletion completed in 22.206532773s • [SLOW TEST:22.814 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:26:08.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should delete old replica sets [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 20 11:26:08.602: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 20 11:26:13.635: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 20 11:26:18.051: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 20 11:26:18.110: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-5csbk,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-5csbk/deployments/test-cleanup-deployment,UID:ad2b46f3-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843195,Generation:1,CreationTimestamp:2020-01-20 11:26:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jan 20 11:26:18.167: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:26:18.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-5csbk" for this suite.
Jan 20 11:26:26.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:26:26.517: INFO: namespace: e2e-tests-deployment-5csbk, resource: bindings, ignored listing per whitelist Jan 20 11:26:26.635: INFO: namespace e2e-tests-deployment-5csbk deletion completed in 8.432627416s • [SLOW TEST:18.415 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:26:26.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 20 11:26:27.639: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2dd6a75-3b77-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-l67pk" to be "success or failure" Jan 20 11:26:27.847: INFO: Pod "downwardapi-volume-b2dd6a75-3b77-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 208.139946ms Jan 20 11:26:29.871: INFO: Pod "downwardapi-volume-b2dd6a75-3b77-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231486545s Jan 20 11:26:31.911: INFO: Pod "downwardapi-volume-b2dd6a75-3b77-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271752779s Jan 20 11:26:33.941: INFO: Pod "downwardapi-volume-b2dd6a75-3b77-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.301738636s Jan 20 11:26:35.961: INFO: Pod "downwardapi-volume-b2dd6a75-3b77-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.321919701s Jan 20 11:26:37.974: INFO: Pod "downwardapi-volume-b2dd6a75-3b77-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.33518992s STEP: Saw pod success Jan 20 11:26:37.974: INFO: Pod "downwardapi-volume-b2dd6a75-3b77-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:26:37.982: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b2dd6a75-3b77-11ea-8bde-0242ac110005 container client-container: STEP: delete the pod Jan 20 11:26:38.299: INFO: Waiting for pod downwardapi-volume-b2dd6a75-3b77-11ea-8bde-0242ac110005 to disappear Jan 20 11:26:38.319: INFO: Pod downwardapi-volume-b2dd6a75-3b77-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:26:38.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-projected-l67pk" for this suite. 
Jan 20 11:26:44.398: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:26:44.725: INFO: namespace: e2e-tests-projected-l67pk, resource: bindings, ignored listing per whitelist Jan 20 11:26:44.734: INFO: namespace e2e-tests-projected-l67pk deletion completed in 6.400023881s • [SLOW TEST:18.098 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:26:44.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating the pod Jan 20 11:26:45.219: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 
11:27:01.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-init-container-4jf9v" for this suite. Jan 20 11:27:07.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:27:08.000: INFO: namespace: e2e-tests-init-container-4jf9v, resource: bindings, ignored listing per whitelist Jan 20 11:27:08.222: INFO: namespace e2e-tests-init-container-4jf9v deletion completed in 6.431708118s • [SLOW TEST:23.488 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:27:08.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 20 11:27:08.524: INFO: Creating deployment "nginx-deployment" Jan 20 11:27:08.659: INFO: Waiting for observed generation 1 Jan 20 11:27:11.715: INFO: Waiting for all required pods to come up Jan 20 11:27:12.303: INFO: Pod 
name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jan 20 11:27:49.699: INFO: Waiting for deployment "nginx-deployment" to complete Jan 20 11:27:49.734: INFO: Updating deployment "nginx-deployment" with a non-existent image Jan 20 11:27:49.784: INFO: Updating deployment nginx-deployment Jan 20 11:27:49.784: INFO: Waiting for observed generation 2 Jan 20 11:27:51.955: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 20 11:27:51.971: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 20 11:27:51.979: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 20 11:27:53.056: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 20 11:27:53.056: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 20 11:27:53.084: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jan 20 11:27:53.347: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jan 20 11:27:53.347: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jan 20 11:27:53.673: INFO: Updating deployment nginx-deployment Jan 20 11:27:53.673: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jan 20 11:27:53.707: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 20 11:27:54.757: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 Jan 20 11:27:55.824: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vl2tg/deployments/nginx-deployment,UID:cb3f7e26-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843601,Generation:3,CreationTimestamp:2020-01-20 11:27:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-20 11:27:50 +0000 UTC 2020-01-20 11:27:08 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-01-20 11:27:54 +0000 UTC 2020-01-20 11:27:54 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jan 20 11:27:56.396: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vl2tg/replicasets/nginx-deployment-5c98f8fb5,UID:e3d543e7-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843595,Generation:3,CreationTimestamp:2020-01-20 11:27:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment cb3f7e26-3b77-11ea-a994-fa163e34d433 0xc0017bf337 0xc0017bf338}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 20 11:27:56.396: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jan 20 11:27:56.396: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-vl2tg/replicasets/nginx-deployment-85ddf47c5d,UID:cb55ed6d-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843594,Generation:3,CreationTimestamp:2020-01-20 11:27:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment cb3f7e26-3b77-11ea-a994-fa163e34d433 0xc0017bf477 0xc0017bf478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jan 20 11:27:57.137: INFO: Pod "nginx-deployment-5c98f8fb5-9p6lr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-9p6lr,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-5c98f8fb5-9p6lr,UID:e7cd376a-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843626,Generation:0,CreationTimestamp:2020-01-20 11:27:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e3d543e7-3b77-11ea-a994-fa163e34d433 0xc001468d77 0xc001468d78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001468de0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001468e00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.137: INFO: Pod "nginx-deployment-5c98f8fb5-fgg45" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-fgg45,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-5c98f8fb5-fgg45,UID:e3dfd74b-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843577,Generation:0,CreationTimestamp:2020-01-20 11:27:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e3d543e7-3b77-11ea-a994-fa163e34d433 0xc001468e60 0xc001468e61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001468f00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001468f20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:49 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-20 11:27:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.138: INFO: Pod "nginx-deployment-5c98f8fb5-g4q67" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-g4q67,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-5c98f8fb5-g4q67,UID:e3da644e-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843563,Generation:0,CreationTimestamp:2020-01-20 11:27:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e3d543e7-3b77-11ea-a994-fa163e34d433 0xc001468fe7 0xc001468fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001469050} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001469070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:49 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-20 11:27:50 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.138: INFO: Pod "nginx-deployment-5c98f8fb5-gxtvm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-gxtvm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-5c98f8fb5-gxtvm,UID:e412062b-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843589,Generation:0,CreationTimestamp:2020-01-20 11:27:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e3d543e7-3b77-11ea-a994-fa163e34d433 0xc001469137 0xc001469138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001469220} {node.kubernetes.io/unreachable Exists NoExecute 0xc001469240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-20 11:27:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.139: INFO: Pod "nginx-deployment-5c98f8fb5-p699f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-p699f,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-5c98f8fb5-p699f,UID:e781e7c0-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843627,Generation:0,CreationTimestamp:2020-01-20 11:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e3d543e7-3b77-11ea-a994-fa163e34d433 0xc001469307 0xc001469308}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001469370} {node.kubernetes.io/unreachable Exists NoExecute 0xc001469390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.139: INFO: Pod "nginx-deployment-5c98f8fb5-v2cg6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-v2cg6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-5c98f8fb5-v2cg6,UID:e3e09ebc-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843585,Generation:0,CreationTimestamp:2020-01-20 11:27:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e3d543e7-3b77-11ea-a994-fa163e34d433 0xc001469477 0xc001469478}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014694e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001469500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:49 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-20 11:27:50 +0000 
UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.139: INFO: Pod "nginx-deployment-5c98f8fb5-wqtbj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wqtbj,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-5c98f8fb5-wqtbj,UID:e4183661-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843618,Generation:0,CreationTimestamp:2020-01-20 11:27:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e3d543e7-3b77-11ea-a994-fa163e34d433 0xc001469647 0xc001469648}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014696d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0014696f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:50 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:,StartTime:2020-01-20 11:27:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.139: INFO: Pod "nginx-deployment-5c98f8fb5-zsmnx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-zsmnx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-5c98f8fb5-zsmnx,UID:e7cdace5-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843621,Generation:0,CreationTimestamp:2020-01-20 11:27:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 e3d543e7-3b77-11ea-a994-fa163e34d433 0xc001469877 0xc001469878}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0014698e0} 
{node.kubernetes.io/unreachable Exists NoExecute 0xc001469900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.140: INFO: Pod "nginx-deployment-85ddf47c5d-46xdv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-46xdv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-46xdv,UID:e7cf38f1-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843631,Generation:0,CreationTimestamp:2020-01-20 11:27:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001469970 0xc001469971}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001469a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001469a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.140: INFO: Pod "nginx-deployment-85ddf47c5d-966tw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-966tw,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-966tw,UID:e76e5544-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843614,Generation:0,CreationTimestamp:2020-01-20 11:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001469a80 0xc001469a81}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001469ae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001469b00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.140: INFO: Pod "nginx-deployment-85ddf47c5d-9m6gx" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9m6gx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-9m6gx,UID:e7349cbd-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843610,Generation:0,CreationTimestamp:2020-01-20 11:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001469b87 0xc001469b88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001469bf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001469c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.141: INFO: Pod "nginx-deployment-85ddf47c5d-9wkt2" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9wkt2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-9wkt2,UID:cb982ae9-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843506,Generation:0,CreationTimestamp:2020-01-20 11:27:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001469c87 0xc001469c88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001469cf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001469d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.12,StartTime:2020-01-20 11:27:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 11:27:42 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e483357013a130566e24806b1c6d1e3cfce1bbc9b11daa9106f868ce441e9451}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.141: INFO: Pod "nginx-deployment-85ddf47c5d-csfds" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-csfds,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-csfds,UID:e7cf78e7-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843629,Generation:0,CreationTimestamp:2020-01-20 11:27:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001469e47 0xc001469e48}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 
0xc001469eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001469ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.141: INFO: Pod "nginx-deployment-85ddf47c5d-dgmt6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-dgmt6,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-dgmt6,UID:e7cf5004-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843628,Generation:0,CreationTimestamp:2020-01-20 11:27:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001469f60 0xc001469f61}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001469fc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001469fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.142: INFO: Pod "nginx-deployment-85ddf47c5d-fvkvt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fvkvt,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-fvkvt,UID:e77cbd41-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843624,Generation:0,CreationTimestamp:2020-01-20 11:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a14040 0xc001a14041}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a140a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a140c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.142: INFO: Pod "nginx-deployment-85ddf47c5d-fz9tz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-fz9tz,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-fz9tz,UID:e77ca231-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843622,Generation:0,CreationTimestamp:2020-01-20 11:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a14137 0xc001a14138}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001a141a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a141c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.142: INFO: Pod "nginx-deployment-85ddf47c5d-gw948" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-gw948,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-gw948,UID:e7cf759f-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843632,Generation:0,CreationTimestamp:2020-01-20 11:27:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a14237 0xc001a14238}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a142a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a142c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.142: INFO: Pod "nginx-deployment-85ddf47c5d-h6m7d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-h6m7d,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-h6m7d,UID:e77caf0a-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843625,Generation:0,CreationTimestamp:2020-01-20 11:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a14320 0xc001a14321}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a14380} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a143a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.143: INFO: Pod "nginx-deployment-85ddf47c5d-hcjmv" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-hcjmv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-hcjmv,UID:cb7cace6-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843519,Generation:0,CreationTimestamp:2020-01-20 11:27:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a14417 0xc001a14418}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001a14480} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a144a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.5,StartTime:2020-01-20 11:27:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 11:27:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d6539a591b314344c3bbfdbb735fe2cf78062d3429efc31829db84337a507e39}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.143: INFO: Pod "nginx-deployment-85ddf47c5d-jr4rq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jr4rq,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-jr4rq,UID:e7cf5908-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843630,Generation:0,CreationTimestamp:2020-01-20 11:27:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a14567 0xc001a14568}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a145d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a145f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.143: INFO: Pod "nginx-deployment-85ddf47c5d-p5dg4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-p5dg4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-p5dg4,UID:e77cd56f-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843623,Generation:0,CreationTimestamp:2020-01-20 11:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a14650 0xc001a14651}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001a146b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a146d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:56 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.143: INFO: Pod "nginx-deployment-85ddf47c5d-px8k8" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-px8k8,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-px8k8,UID:cb6e02bf-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843524,Generation:0,CreationTimestamp:2020-01-20 11:27:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a14747 0xc001a14748}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a147b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a147d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-20 11:27:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 11:27:39 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://097eb48b93d617976b272c1cc484ae36b6560b26e9e896b093ab4d6089565241}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.144: INFO: Pod "nginx-deployment-85ddf47c5d-qw5cm" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-qw5cm,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-qw5cm,UID:cb7c6fef-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843508,Generation:0,CreationTimestamp:2020-01-20 11:27:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a14897 0xc001a14898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001a14900} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a14920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.11,StartTime:2020-01-20 11:27:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 11:27:42 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b625dfaf7e066111e84bde78f926b8a881610bfa1098437092189cdd678cc62a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.144: INFO: Pod "nginx-deployment-85ddf47c5d-s9bjr" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-s9bjr,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-s9bjr,UID:cb735e24-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843494,Generation:0,CreationTimestamp:2020-01-20 11:27:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a149e7 0xc001a149e8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a14a50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a14a70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.6,StartTime:2020-01-20 11:27:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 11:27:41 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://01ba7193b396682b904376dcd4030a2cb41c2857f3bf936a3430de31daf2c676}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.145: INFO: Pod "nginx-deployment-85ddf47c5d-stjlv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-stjlv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-stjlv,UID:e76dfbdf-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843616,Generation:0,CreationTimestamp:2020-01-20 11:27:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a14b37 0xc001a14b38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a14ba0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a14bc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.145: INFO: Pod "nginx-deployment-85ddf47c5d-td68q" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-td68q,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-td68q,UID:cb7c6920-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843532,Generation:0,CreationTimestamp:2020-01-20 11:27:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a14c37 0xc001a14c38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a14ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a14cc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.9,StartTime:2020-01-20 11:27:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 11:27:42 +0000 UTC,} 
nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d5196896d47c7e2c7a2e1da21e6b2c2efbd9186e7cb14403d413b615b98b0e92}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.145: INFO: Pod "nginx-deployment-85ddf47c5d-tdjdx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tdjdx,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-tdjdx,UID:cb73e71e-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843527,Generation:0,CreationTimestamp:2020-01-20 11:27:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a14d87 0xc001a14d88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001a14df0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a14e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:08 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.8,StartTime:2020-01-20 11:27:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 11:27:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://11d0f73f5379f637327a42ea3143e5219fb3a213069de80120f467cd12f77a92}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jan 20 11:27:57.146: INFO: Pod "nginx-deployment-85ddf47c5d-wp9tc" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wp9tc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-vl2tg,SelfLink:/api/v1/namespaces/e2e-tests-deployment-vl2tg/pods/nginx-deployment-85ddf47c5d-wp9tc,UID:cb7c80e0-3b77-11ea-a994-fa163e34d433,ResourceVersion:18843535,Generation:0,CreationTimestamp:2020-01-20 11:27:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d cb55ed6d-3b77-11ea-a994-fa163e34d433 0xc001a14ed7 0xc001a14ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-qnx5z {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qnx5z,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-qnx5z true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc001a14f50} {node.kubernetes.io/unreachable Exists NoExecute 0xc001a14f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:09 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:27:09 +0000 UTC }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.7,StartTime:2020-01-20 11:27:09 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-20 11:27:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://48f0d85015fa8dba96633892b4d66d480bcb15b74834d9ce401a71ea1cbb7435}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:27:57.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-deployment-vl2tg" for this suite. 
Jan 20 11:29:15.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:29:17.039: INFO: namespace: e2e-tests-deployment-vl2tg, resource: bindings, ignored listing per whitelist Jan 20 11:29:17.099: INFO: namespace e2e-tests-deployment-vl2tg deletion completed in 1m19.516653885s • [SLOW TEST:128.877 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:29:17.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 STEP: creating an rc Jan 20 11:29:19.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gqplm' Jan 20 11:29:22.360: INFO: stderr: "" Jan 20 11:29:22.360: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Waiting for 
Redis master to start. Jan 20 11:29:23.374: INFO: Selector matched 1 pods for map[app:redis] Jan 20 11:29:23.374: INFO: Found 0 / 1 Jan 20 11:29:24.380: INFO: Selector matched 1 pods for map[app:redis] Jan 20 11:29:24.380: INFO: Found 0 / 1 Jan 20 11:29:25.768: INFO: Selector matched 1 pods for map[app:redis] Jan 20 11:29:25.768: INFO: Found 0 / 1 Jan 20 11:29:26.381: INFO: Selector matched 1 pods for map[app:redis] Jan 20 11:29:26.381: INFO: Found 0 / 1 Jan 20 11:29:27.382: INFO: Selector matched 1 pods for map[app:redis] Jan 20 11:29:27.382: INFO: Found 0 / 1 Jan 20 11:29:28.609: INFO: Selector matched 1 pods for map[app:redis] Jan 20 11:29:28.609: INFO: Found 0 / 1 Jan 20 11:29:29.635: INFO: Selector matched 1 pods for map[app:redis] Jan 20 11:29:29.635: INFO: Found 0 / 1 Jan 20 11:29:30.375: INFO: Selector matched 1 pods for map[app:redis] Jan 20 11:29:30.375: INFO: Found 0 / 1 Jan 20 11:29:31.384: INFO: Selector matched 1 pods for map[app:redis] Jan 20 11:29:31.385: INFO: Found 0 / 1 Jan 20 11:29:32.375: INFO: Selector matched 1 pods for map[app:redis] Jan 20 11:29:32.375: INFO: Found 0 / 1 Jan 20 11:29:33.378: INFO: Selector matched 1 pods for map[app:redis] Jan 20 11:29:33.378: INFO: Found 1 / 1 Jan 20 11:29:33.378: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 20 11:29:33.386: INFO: Selector matched 1 pods for map[app:redis] Jan 20 11:29:33.386: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Jan 20 11:29:33.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-ztghk redis-master --namespace=e2e-tests-kubectl-gqplm' Jan 20 11:29:33.622: INFO: stderr: "" Jan 20 11:29:33.623: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Jan 11:29:31.541 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Jan 11:29:31.541 # Server started, Redis version 3.2.12\n1:M 20 Jan 11:29:31.542 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Jan 11:29:31.542 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jan 20 11:29:33.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ztghk redis-master --namespace=e2e-tests-kubectl-gqplm --tail=1' Jan 20 11:29:33.880: INFO: stderr: "" Jan 20 11:29:33.880: INFO: stdout: "1:M 20 Jan 11:29:31.542 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jan 20 11:29:33.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ztghk redis-master --namespace=e2e-tests-kubectl-gqplm --limit-bytes=1' Jan 20 11:29:34.271: INFO: stderr: "" Jan 20 11:29:34.271: INFO: stdout: " " STEP: exposing timestamps Jan 20 11:29:34.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ztghk redis-master --namespace=e2e-tests-kubectl-gqplm --tail=1 --timestamps' Jan 20 11:29:34.415: INFO: 
stderr: "" Jan 20 11:29:34.415: INFO: stdout: "2020-01-20T11:29:31.543677792Z 1:M 20 Jan 11:29:31.542 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jan 20 11:29:36.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ztghk redis-master --namespace=e2e-tests-kubectl-gqplm --since=1s' Jan 20 11:29:37.125: INFO: stderr: "" Jan 20 11:29:37.125: INFO: stdout: "" Jan 20 11:29:37.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-ztghk redis-master --namespace=e2e-tests-kubectl-gqplm --since=24h' Jan 20 11:29:37.275: INFO: stderr: "" Jan 20 11:29:37.276: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Jan 11:29:31.541 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Jan 11:29:31.541 # Server started, Redis version 3.2.12\n1:M 20 Jan 11:29:31.542 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 20 Jan 11:29:31.542 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 STEP: using delete to clean up resources Jan 20 11:29:37.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-gqplm' Jan 20 11:29:37.414: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 20 11:29:37.414: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jan 20 11:29:37.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-gqplm' Jan 20 11:29:37.558: INFO: stderr: "No resources found.\n" Jan 20 11:29:37.558: INFO: stdout: "" Jan 20 11:29:37.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-gqplm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 20 11:29:37.823: INFO: stderr: "" Jan 20 11:29:37.824: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:29:37.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-gqplm" for this suite. 
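[Editor's note] The log-filtering flags this test exercises (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) can be tried by hand against any running pod. A minimal sketch; the pod, container, and namespace names below are placeholders, not the ones from this run. Note the run above uses the deprecated `kubectl log` alias, while `kubectl logs` is the current spelling:

```shell
# Show only the last line of the container's log
kubectl logs my-pod -c my-container -n my-namespace --tail=1

# Cap output at N bytes (useful for sampling very chatty containers;
# the test uses --limit-bytes=1 and gets back a single character)
kubectl logs my-pod -c my-container -n my-namespace --limit-bytes=16

# Prefix each line with its RFC3339 timestamp
kubectl logs my-pod -c my-container -n my-namespace --tail=1 --timestamps

# Restrict output to a time window, as the "restricting to a time range"
# step does: a short --since may legitimately return nothing
kubectl logs my-pod -c my-container -n my-namespace --since=1s
kubectl logs my-pod -c my-container -n my-namespace --since=24h
```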
Jan 20 11:29:44.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:29:45.007: INFO: namespace: e2e-tests-kubectl-gqplm, resource: bindings, ignored listing per whitelist Jan 20 11:29:45.080: INFO: namespace e2e-tests-kubectl-gqplm deletion completed in 7.223928652s • [SLOW TEST:27.980 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:29:45.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test downward API volume plugin Jan 20 11:29:45.409: INFO: Waiting up to 5m0s for pod "downwardapi-volume-28ba34db-3b78-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-l2m7g" to be "success or failure" Jan 20 11:29:45.448: INFO: 
Pod "downwardapi-volume-28ba34db-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 39.052771ms Jan 20 11:29:47.463: INFO: Pod "downwardapi-volume-28ba34db-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054429682s Jan 20 11:29:49.487: INFO: Pod "downwardapi-volume-28ba34db-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077618206s Jan 20 11:29:51.543: INFO: Pod "downwardapi-volume-28ba34db-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134428733s Jan 20 11:29:53.556: INFO: Pod "downwardapi-volume-28ba34db-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147009593s Jan 20 11:29:55.620: INFO: Pod "downwardapi-volume-28ba34db-3b78-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.210564073s STEP: Saw pod success Jan 20 11:29:55.620: INFO: Pod "downwardapi-volume-28ba34db-3b78-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:29:55.627: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-28ba34db-3b78-11ea-8bde-0242ac110005 container client-container: STEP: delete the pod Jan 20 11:29:55.908: INFO: Waiting for pod downwardapi-volume-28ba34db-3b78-11ea-8bde-0242ac110005 to disappear Jan 20 11:29:55.950: INFO: Pod downwardapi-volume-28ba34db-3b78-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:29:55.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-downward-api-l2m7g" for this suite. 
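[Editor's note] The downward API volume under test projects a container's resource limits into files inside the pod. A hypothetical reproduction, assuming a cluster is reachable; the pod name, mount path, and file name are illustrative, not taken from the run:

```shell
# Project the container's memory limit into /etc/podinfo/mem_limit
# via a downwardAPI volume, then print it and exit.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
```

Once the pod reaches Succeeded, its log contains the memory limit in bytes, which is what the e2e test asserts on.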
Jan 20 11:30:02.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:30:02.199: INFO: namespace: e2e-tests-downward-api-l2m7g, resource: bindings, ignored listing per whitelist Jan 20 11:30:02.218: INFO: namespace e2e-tests-downward-api-l2m7g deletion completed in 6.15994627s • [SLOW TEST:17.138 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:30:02.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test override command Jan 20 11:30:02.421: INFO: Waiting up to 5m0s for pod "client-containers-32d9d108-3b78-11ea-8bde-0242ac110005" in namespace "e2e-tests-containers-hqd9p" to be "success or failure" Jan 20 11:30:02.466: INFO: Pod "client-containers-32d9d108-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 44.489842ms Jan 20 11:30:04.516: INFO: Pod "client-containers-32d9d108-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09445743s Jan 20 11:30:06.588: INFO: Pod "client-containers-32d9d108-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166951843s Jan 20 11:30:08.608: INFO: Pod "client-containers-32d9d108-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.186859861s Jan 20 11:30:10.696: INFO: Pod "client-containers-32d9d108-3b78-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.275117275s STEP: Saw pod success Jan 20 11:30:10.696: INFO: Pod "client-containers-32d9d108-3b78-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:30:10.703: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-32d9d108-3b78-11ea-8bde-0242ac110005 container test-container: STEP: delete the pod Jan 20 11:30:10.772: INFO: Waiting for pod client-containers-32d9d108-3b78-11ea-8bde-0242ac110005 to disappear Jan 20 11:30:10.782: INFO: Pod client-containers-32d9d108-3b78-11ea-8bde-0242ac110005 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:30:10.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-containers-hqd9p" for this suite. 
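[Editor's note] What this test verifies is that `spec.containers[].command` overrides the image's Docker ENTRYPOINT (and `args` overrides CMD). A minimal sketch under the assumption of a reachable cluster; names are illustrative:

```shell
# command: replaces the image ENTRYPOINT; args: replaces the image CMD.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]
    args: ["entrypoint", "overridden"]
EOF

# After the pod completes, its log shows the overridden command's output:
kubectl logs entrypoint-override-demo
```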
Jan 20 11:30:16.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:30:17.111: INFO: namespace: e2e-tests-containers-hqd9p, resource: bindings, ignored listing per whitelist Jan 20 11:30:17.118: INFO: namespace e2e-tests-containers-hqd9p deletion completed in 6.328344025s • [SLOW TEST:14.899 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:30:17.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 Jan 20 11:30:17.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jan 20 11:30:17.467: INFO: stderr: "" Jan 20 11:30:17.467: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", 
GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.8\", GitCommit:\"0c6d31a99f81476dfc9871ba3cf3f597bec29b58\", GitTreeState:\"clean\", BuildDate:\"2019-07-08T08:38:54Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:30:17.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-kubectl-64c4z" for this suite. Jan 20 11:30:23.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:30:23.735: INFO: namespace: e2e-tests-kubectl-64c4z, resource: bindings, ignored listing per whitelist Jan 20 11:30:23.747: INFO: namespace e2e-tests-kubectl-64c4z deletion completed in 6.27139919s • [SLOW TEST:6.629 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:30:23.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: 
Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0120 11:30:38.429665 8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 20 11:30:38.429: INFO: For apiserver_request_count: For apiserver_request_latencies_summary: For etcd_helper_cache_entry_count: For etcd_helper_cache_hit_count: For etcd_helper_cache_miss_count: For etcd_request_cache_add_latencies_summary: For etcd_request_cache_get_latencies_summary: For etcd_request_latencies_summary: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 
20 11:30:38.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-gc-w45b8" for this suite. Jan 20 11:31:00.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:31:00.722: INFO: namespace: e2e-tests-gc-w45b8, resource: bindings, ignored listing per whitelist Jan 20 11:31:00.816: INFO: namespace e2e-tests-gc-w45b8 deletion completed in 22.382659622s • [SLOW TEST:37.068 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:31:00.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 20 11:31:01.146: INFO: Waiting up to 5m0s for pod "pod-55df35c3-3b78-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-5qpld" to be "success or failure" Jan 20 11:31:01.167: INFO: Pod "pod-55df35c3-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.348515ms Jan 20 11:31:04.718: INFO: Pod "pod-55df35c3-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 3.571904956s Jan 20 11:31:06.912: INFO: Pod "pod-55df35c3-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.76624816s Jan 20 11:31:08.929: INFO: Pod "pod-55df35c3-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.783136209s Jan 20 11:31:11.269: INFO: Pod "pod-55df35c3-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.123266558s Jan 20 11:31:13.290: INFO: Pod "pod-55df35c3-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.144170012s Jan 20 11:31:15.305: INFO: Pod "pod-55df35c3-3b78-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.158942383s STEP: Saw pod success Jan 20 11:31:15.305: INFO: Pod "pod-55df35c3-3b78-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:31:15.338: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-55df35c3-3b78-11ea-8bde-0242ac110005 container test-container: STEP: delete the pod Jan 20 11:31:16.869: INFO: Waiting for pod pod-55df35c3-3b78-11ea-8bde-0242ac110005 to disappear Jan 20 11:31:16.957: INFO: Pod pod-55df35c3-3b78-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:31:16.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-emptydir-5qpld" for this suite. 
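[Editor's note] The "(root,0666,tmpfs)" case means: an emptyDir volume backed by memory (`medium: Memory`, i.e. tmpfs), with a file written as root under mode 0666. A sketch of the same setup, with illustrative names:

```shell
# emptyDir with medium: Memory is mounted as tmpfs; the container
# writes a mode-0666 file into it and lists the result.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /cache/f && chmod 0666 /cache/f && mount | grep ' /cache ' && ls -l /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory
EOF
```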
Jan 20 11:31:22.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:31:23.019: INFO: namespace: e2e-tests-emptydir-5qpld, resource: bindings, ignored listing per whitelist Jan 20 11:31:23.103: INFO: namespace e2e-tests-emptydir-5qpld deletion completed in 6.136150037s • [SLOW TEST:22.286 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:31:23.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: Creating secret with name secret-test-63188998-3b78-11ea-8bde-0242ac110005 STEP: Creating a pod to test consume secrets Jan 20 11:31:23.315: INFO: Waiting up to 5m0s for pod "pod-secrets-631966dd-3b78-11ea-8bde-0242ac110005" in namespace "e2e-tests-secrets-grv6j" to be "success or failure" Jan 20 11:31:23.359: INFO: Pod "pod-secrets-631966dd-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. 
Elapsed: 44.621289ms Jan 20 11:31:25.373: INFO: Pod "pod-secrets-631966dd-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058569385s Jan 20 11:31:27.383: INFO: Pod "pod-secrets-631966dd-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068167163s Jan 20 11:31:29.410: INFO: Pod "pod-secrets-631966dd-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0949432s Jan 20 11:31:31.426: INFO: Pod "pod-secrets-631966dd-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111447875s Jan 20 11:31:33.441: INFO: Pod "pod-secrets-631966dd-3b78-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.126126028s STEP: Saw pod success Jan 20 11:31:33.441: INFO: Pod "pod-secrets-631966dd-3b78-11ea-8bde-0242ac110005" satisfied condition "success or failure" Jan 20 11:31:33.445: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-631966dd-3b78-11ea-8bde-0242ac110005 container secret-volume-test: STEP: delete the pod Jan 20 11:31:33.530: INFO: Waiting for pod pod-secrets-631966dd-3b78-11ea-8bde-0242ac110005 to disappear Jan 20 11:31:33.631: INFO: Pod pod-secrets-631966dd-3b78-11ea-8bde-0242ac110005 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jan 20 11:31:33.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-secrets-grv6j" for this suite. 
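[Editor's note] "Consumable in multiple volumes" means one Secret mounted at two different paths in the same pod, with both mounts serving the same data. A hedged sketch; the secret name, key, and mount paths are placeholders:

```shell
# Create a secret, then mount it twice in one pod and read both copies.
kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-two-mounts   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: demo-secret
  - name: secret-volume-2
    secret:
      secretName: demo-secret
EOF
```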
Jan 20 11:31:41.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 20 11:31:41.729: INFO: namespace: e2e-tests-secrets-grv6j, resource: bindings, ignored listing per whitelist Jan 20 11:31:41.951: INFO: namespace e2e-tests-secrets-grv6j deletion completed in 8.30517412s • [SLOW TEST:18.848 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 STEP: Creating a kubernetes client Jan 20 11:31:41.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 STEP: creating Redis RC Jan 20 11:31:42.226: INFO: namespace e2e-tests-kubectl-h56dc Jan 20 11:31:42.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-h56dc' Jan 20 11:31:42.764: INFO: stderr: "" Jan 20 11:31:42.764: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Jan 20 11:31:43.784: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:31:43.784: INFO: Found 0 / 1
Jan 20 11:31:44.779: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:31:44.779: INFO: Found 0 / 1
Jan 20 11:31:45.784: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:31:45.784: INFO: Found 0 / 1
Jan 20 11:31:46.779: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:31:46.779: INFO: Found 0 / 1
Jan 20 11:31:48.078: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:31:48.079: INFO: Found 0 / 1
Jan 20 11:31:48.778: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:31:48.778: INFO: Found 0 / 1
Jan 20 11:31:49.809: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:31:49.809: INFO: Found 0 / 1
Jan 20 11:31:50.783: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:31:50.783: INFO: Found 0 / 1
Jan 20 11:31:51.780: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:31:51.781: INFO: Found 1 / 1
Jan 20 11:31:51.781: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 20 11:31:51.788: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:31:51.788: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 20 11:31:51.788: INFO: wait on redis-master startup in e2e-tests-kubectl-h56dc
Jan 20 11:31:51.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-4nspc redis-master --namespace=e2e-tests-kubectl-h56dc'
Jan 20 11:31:51.984: INFO: stderr: ""
Jan 20 11:31:51.985: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 20 Jan 11:31:50.416 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 20 Jan 11:31:50.416 # Server started, Redis version 3.2.12\n1:M 20 Jan 11:31:50.417 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 20 Jan 11:31:50.417 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan 20 11:31:51.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-h56dc'
Jan 20 11:31:52.269: INFO: stderr: ""
Jan 20 11:31:52.269: INFO: stdout: "service/rm2 exposed\n"
Jan 20 11:31:52.372: INFO: Service rm2 in namespace e2e-tests-kubectl-h56dc found.
STEP: exposing service
Jan 20 11:31:54.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-h56dc'
Jan 20 11:31:54.798: INFO: stderr: ""
Jan 20 11:31:54.798: INFO: stdout: "service/rm3 exposed\n"
Jan 20 11:31:54.935: INFO: Service rm3 in namespace e2e-tests-kubectl-h56dc found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:31:56.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h56dc" for this suite.
Jan 20 11:32:23.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:32:23.354: INFO: namespace: e2e-tests-kubectl-h56dc, resource: bindings, ignored listing per whitelist
Jan 20 11:32:23.364: INFO: namespace e2e-tests-kubectl-h56dc deletion completed in 26.403883648s

• [SLOW TEST:41.413 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:32:23.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-870fcbec-3b78-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 20 11:32:23.648: INFO: Waiting up to 5m0s for pod
"pod-secrets-8710e715-3b78-11ea-8bde-0242ac110005" in namespace "e2e-tests-secrets-zgsxp" to be "success or failure"
Jan 20 11:32:23.667: INFO: Pod "pod-secrets-8710e715-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 19.764158ms
Jan 20 11:32:25.682: INFO: Pod "pod-secrets-8710e715-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033962921s
Jan 20 11:32:27.763: INFO: Pod "pod-secrets-8710e715-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115438829s
Jan 20 11:32:29.795: INFO: Pod "pod-secrets-8710e715-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147204084s
Jan 20 11:32:31.819: INFO: Pod "pod-secrets-8710e715-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171398411s
Jan 20 11:32:33.924: INFO: Pod "pod-secrets-8710e715-3b78-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.276808326s
STEP: Saw pod success
Jan 20 11:32:33.925: INFO: Pod "pod-secrets-8710e715-3b78-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:32:33.932: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-8710e715-3b78-11ea-8bde-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 20 11:32:34.296: INFO: Waiting for pod pod-secrets-8710e715-3b78-11ea-8bde-0242ac110005 to disappear
Jan 20 11:32:34.309: INFO: Pod pod-secrets-8710e715-3b78-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:32:34.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zgsxp" for this suite.
Jan 20 11:32:40.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:32:40.713: INFO: namespace: e2e-tests-secrets-zgsxp, resource: bindings, ignored listing per whitelist
Jan 20 11:32:40.722: INFO: namespace e2e-tests-secrets-zgsxp deletion completed in 6.326090773s

• [SLOW TEST:17.357 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:32:40.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 11:32:40.973: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9156970a-3b78-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-nl9bn" to be "success or failure"
Jan 20 11:32:41.007: INFO: Pod "downwardapi-volume-9156970a-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false.
Elapsed: 34.099888ms
Jan 20 11:32:43.135: INFO: Pod "downwardapi-volume-9156970a-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1621542s
Jan 20 11:32:45.149: INFO: Pod "downwardapi-volume-9156970a-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.17589326s
Jan 20 11:32:47.201: INFO: Pod "downwardapi-volume-9156970a-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228122793s
Jan 20 11:32:49.596: INFO: Pod "downwardapi-volume-9156970a-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.623182355s
Jan 20 11:32:51.632: INFO: Pod "downwardapi-volume-9156970a-3b78-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.658618814s
STEP: Saw pod success
Jan 20 11:32:51.632: INFO: Pod "downwardapi-volume-9156970a-3b78-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:32:51.647: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-9156970a-3b78-11ea-8bde-0242ac110005 container client-container: 
STEP: delete the pod
Jan 20 11:32:51.924: INFO: Waiting for pod downwardapi-volume-9156970a-3b78-11ea-8bde-0242ac110005 to disappear
Jan 20 11:32:51.960: INFO: Pod downwardapi-volume-9156970a-3b78-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:32:51.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nl9bn" for this suite.
Jan 20 11:32:58.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:32:58.291: INFO: namespace: e2e-tests-downward-api-nl9bn, resource: bindings, ignored listing per whitelist
Jan 20 11:32:58.390: INFO: namespace e2e-tests-downward-api-nl9bn deletion completed in 6.413286508s

• [SLOW TEST:17.668 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:32:58.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jan 20 11:32:58.666: INFO: Waiting up to 5m0s for pod "client-containers-9bf0e0b5-3b78-11ea-8bde-0242ac110005" in namespace "e2e-tests-containers-j9zh9" to be "success or failure"
Jan 20 11:32:58.672: INFO: Pod "client-containers-9bf0e0b5-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false.
Elapsed: 5.69134ms
Jan 20 11:33:00.690: INFO: Pod "client-containers-9bf0e0b5-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024196954s
Jan 20 11:33:02.706: INFO: Pod "client-containers-9bf0e0b5-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039569426s
Jan 20 11:33:04.713: INFO: Pod "client-containers-9bf0e0b5-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046663876s
Jan 20 11:33:06.743: INFO: Pod "client-containers-9bf0e0b5-3b78-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076526456s
STEP: Saw pod success
Jan 20 11:33:06.743: INFO: Pod "client-containers-9bf0e0b5-3b78-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:33:06.747: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-9bf0e0b5-3b78-11ea-8bde-0242ac110005 container test-container: 
STEP: delete the pod
Jan 20 11:33:06.809: INFO: Waiting for pod client-containers-9bf0e0b5-3b78-11ea-8bde-0242ac110005 to disappear
Jan 20 11:33:06.817: INFO: Pod client-containers-9bf0e0b5-3b78-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:33:06.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-j9zh9" for this suite.
Jan 20 11:33:13.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:33:13.457: INFO: namespace: e2e-tests-containers-j9zh9, resource: bindings, ignored listing per whitelist
Jan 20 11:33:13.494: INFO: namespace e2e-tests-containers-j9zh9 deletion completed in 6.66681741s

• [SLOW TEST:15.103 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:33:13.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jan 20 11:33:13.858: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-2l8w4" to be "success or failure"
Jan 20 11:33:14.013: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false.
Elapsed: 154.559838ms
Jan 20 11:33:16.034: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175908142s
Jan 20 11:33:18.055: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196804975s
Jan 20 11:33:20.103: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.245392711s
Jan 20 11:33:22.367: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.509389272s
Jan 20 11:33:24.912: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.054537646s
Jan 20 11:33:26.928: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.069868557s
STEP: Saw pod success
Jan 20 11:33:26.928: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 20 11:33:26.937: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 20 11:33:27.014: INFO: Waiting for pod pod-host-path-test to disappear
Jan 20 11:33:27.068: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:33:27.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-2l8w4" for this suite.
Jan 20 11:33:33.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:33:33.132: INFO: namespace: e2e-tests-hostpath-2l8w4, resource: bindings, ignored listing per whitelist
Jan 20 11:33:33.382: INFO: namespace e2e-tests-hostpath-2l8w4 deletion completed in 6.305963476s

• [SLOW TEST:19.888 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:33:33.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 11:33:34.080: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b107c593-3b78-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-l7ftn" to be "success or failure"
Jan 20 11:33:34.298: INFO:
Pod "downwardapi-volume-b107c593-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 218.211635ms
Jan 20 11:33:36.309: INFO: Pod "downwardapi-volume-b107c593-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228864268s
Jan 20 11:33:38.326: INFO: Pod "downwardapi-volume-b107c593-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.246012299s
Jan 20 11:33:40.667: INFO: Pod "downwardapi-volume-b107c593-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.586901944s
Jan 20 11:33:42.776: INFO: Pod "downwardapi-volume-b107c593-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.695716725s
Jan 20 11:33:44.789: INFO: Pod "downwardapi-volume-b107c593-3b78-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.708981578s
STEP: Saw pod success
Jan 20 11:33:44.789: INFO: Pod "downwardapi-volume-b107c593-3b78-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:33:44.793: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b107c593-3b78-11ea-8bde-0242ac110005 container client-container: 
STEP: delete the pod
Jan 20 11:33:45.335: INFO: Waiting for pod downwardapi-volume-b107c593-3b78-11ea-8bde-0242ac110005 to disappear
Jan 20 11:33:45.659: INFO: Pod downwardapi-volume-b107c593-3b78-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:33:45.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-l7ftn" for this suite.
Jan 20 11:33:51.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:33:52.211: INFO: namespace: e2e-tests-downward-api-l7ftn, resource: bindings, ignored listing per whitelist
Jan 20 11:33:52.215: INFO: namespace e2e-tests-downward-api-l7ftn deletion completed in 6.539292001s

• [SLOW TEST:18.833 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:33:52.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-bbf6028a-3b78-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 20 11:33:52.526: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bbf8ac7f-3b78-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-m8j5v" to be "success or failure"
Jan 20 11:33:52.562: INFO: Pod "pod-projected-configmaps-bbf8ac7f-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false.
Elapsed: 36.265761ms
Jan 20 11:33:54.605: INFO: Pod "pod-projected-configmaps-bbf8ac7f-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079106436s
Jan 20 11:33:56.615: INFO: Pod "pod-projected-configmaps-bbf8ac7f-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089137283s
Jan 20 11:33:58.679: INFO: Pod "pod-projected-configmaps-bbf8ac7f-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152790308s
Jan 20 11:34:00.726: INFO: Pod "pod-projected-configmaps-bbf8ac7f-3b78-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.200210052s
STEP: Saw pod success
Jan 20 11:34:00.726: INFO: Pod "pod-projected-configmaps-bbf8ac7f-3b78-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:34:00.735: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-bbf8ac7f-3b78-11ea-8bde-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 11:34:00.817: INFO: Waiting for pod pod-projected-configmaps-bbf8ac7f-3b78-11ea-8bde-0242ac110005 to disappear
Jan 20 11:34:00.892: INFO: Pod pod-projected-configmaps-bbf8ac7f-3b78-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:34:00.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-m8j5v" for this suite.
Jan 20 11:34:06.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:34:07.020: INFO: namespace: e2e-tests-projected-m8j5v, resource: bindings, ignored listing per whitelist
Jan 20 11:34:07.200: INFO: namespace e2e-tests-projected-m8j5v deletion completed in 6.290672384s

• [SLOW TEST:14.984 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:34:07.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-4j88q/configmap-test-c4f52acd-3b78-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 20 11:34:07.505: INFO: Waiting up to 5m0s for pod "pod-configmaps-c4f6732f-3b78-11ea-8bde-0242ac110005" in namespace "e2e-tests-configmap-4j88q" to be "success or failure"
Jan 20 11:34:07.623: INFO: Pod "pod-configmaps-c4f6732f-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false.
Elapsed: 118.516654ms
Jan 20 11:34:09.636: INFO: Pod "pod-configmaps-c4f6732f-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131017249s
Jan 20 11:34:11.668: INFO: Pod "pod-configmaps-c4f6732f-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163209975s
Jan 20 11:34:13.688: INFO: Pod "pod-configmaps-c4f6732f-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183184761s
Jan 20 11:34:16.119: INFO: Pod "pod-configmaps-c4f6732f-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.613966961s
Jan 20 11:34:18.506: INFO: Pod "pod-configmaps-c4f6732f-3b78-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.001167272s
STEP: Saw pod success
Jan 20 11:34:18.506: INFO: Pod "pod-configmaps-c4f6732f-3b78-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:34:18.546: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-c4f6732f-3b78-11ea-8bde-0242ac110005 container env-test: 
STEP: delete the pod
Jan 20 11:34:18.871: INFO: Waiting for pod pod-configmaps-c4f6732f-3b78-11ea-8bde-0242ac110005 to disappear
Jan 20 11:34:18.886: INFO: Pod pod-configmaps-c4f6732f-3b78-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:34:18.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-4j88q" for this suite.
Jan 20 11:34:24.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:34:25.001: INFO: namespace: e2e-tests-configmap-4j88q, resource: bindings, ignored listing per whitelist
Jan 20 11:34:25.132: INFO: namespace e2e-tests-configmap-4j88q deletion completed in 6.237640612s

• [SLOW TEST:17.932 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:34:25.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 20 11:34:25.386: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/:
alternatives.log
alternatives.l... (200; 30.644546ms)
Jan 20 11:34:25.397: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.158504ms)
Jan 20 11:34:25.406: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.552129ms)
Jan 20 11:34:25.414: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.947905ms)
Jan 20 11:34:25.421: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.372562ms)
Jan 20 11:34:25.431: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.87511ms)
Jan 20 11:34:25.440: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.095414ms)
Jan 20 11:34:25.507: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 67.082514ms)
Jan 20 11:34:25.525: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.084654ms)
Jan 20 11:34:25.531: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.659757ms)
Jan 20 11:34:25.538: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.594052ms)
Jan 20 11:34:25.544: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.509701ms)
Jan 20 11:34:25.551: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.772776ms)
Jan 20 11:34:25.558: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.997586ms)
Jan 20 11:34:25.564: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.56355ms)
Jan 20 11:34:25.570: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.405188ms)
Jan 20 11:34:25.576: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.763719ms)
Jan 20 11:34:25.582: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.575147ms)
Jan 20 11:34:25.587: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.640858ms)
Jan 20 11:34:25.594: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.384701ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:34:25.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-zdxxr" for this suite.
Jan 20 11:34:31.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:34:31.784: INFO: namespace: e2e-tests-proxy-zdxxr, resource: bindings, ignored listing per whitelist
Jan 20 11:34:31.790: INFO: namespace e2e-tests-proxy-zdxxr deletion completed in 6.190984117s

• [SLOW TEST:6.658 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
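The twenty numbered requests above all hit the node's logs subresource through the apiserver proxy. A minimal sketch of the same request, assuming kubectl access to the cluster; the node name is a placeholder:

```shell
# Fetch a node's /logs/ directory listing via the apiserver proxy subresource,
# as the test above does 20 times. <node-name> is a placeholder.
kubectl get --raw /api/v1/nodes/<node-name>/proxy/logs/
```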
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:34:31.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 20 11:34:32.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-l5fqh'
Jan 20 11:34:32.244: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 20 11:34:32.244: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jan 20 11:34:36.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-l5fqh'
Jan 20 11:34:36.569: INFO: stderr: ""
Jan 20 11:34:36.569: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:34:36.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-l5fqh" for this suite.
Jan 20 11:34:42.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:34:42.855: INFO: namespace: e2e-tests-kubectl-l5fqh, resource: bindings, ignored listing per whitelist
Jan 20 11:34:42.867: INFO: namespace e2e-tests-kubectl-l5fqh deletion completed in 6.292528552s

• [SLOW TEST:11.077 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
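The stderr captured above warns that the `deployment/v1beta1` generator is deprecated. A sketch of the test's invocation and of the replacement the warning suggests, assuming a reachable cluster (names mirror the test's):

```shell
# v1.13-era form used by the test (deprecated generator):
kubectl run e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine \
  --generator=deployment/v1beta1

# Replacement suggested by the deprecation warning:
kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine
```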
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:34:42.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 20 11:34:43.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-hvhb7'
Jan 20 11:34:43.207: INFO: stderr: ""
Jan 20 11:34:43.207: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jan 20 11:34:43.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-hvhb7'
Jan 20 11:34:52.717: INFO: stderr: ""
Jan 20 11:34:52.718: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:34:52.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hvhb7" for this suite.
Jan 20 11:34:58.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:34:58.828: INFO: namespace: e2e-tests-kubectl-hvhb7, resource: bindings, ignored listing per whitelist
Jan 20 11:34:59.090: INFO: namespace e2e-tests-kubectl-hvhb7 deletion completed in 6.356893247s

• [SLOW TEST:16.222 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:34:59.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 20 11:34:59.227: INFO: Creating ReplicaSet my-hostname-basic-e3ce19f0-3b78-11ea-8bde-0242ac110005
Jan 20 11:34:59.342: INFO: Pod name my-hostname-basic-e3ce19f0-3b78-11ea-8bde-0242ac110005: Found 0 pods out of 1
Jan 20 11:35:04.359: INFO: Pod name my-hostname-basic-e3ce19f0-3b78-11ea-8bde-0242ac110005: Found 1 pods out of 1
Jan 20 11:35:04.359: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e3ce19f0-3b78-11ea-8bde-0242ac110005" is running
Jan 20 11:35:08.380: INFO: Pod "my-hostname-basic-e3ce19f0-3b78-11ea-8bde-0242ac110005-5b6ml" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 11:34:59 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 11:34:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e3ce19f0-3b78-11ea-8bde-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 11:34:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-e3ce19f0-3b78-11ea-8bde-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 11:34:59 +0000 UTC Reason: Message:}])
Jan 20 11:35:08.380: INFO: Trying to dial the pod
Jan 20 11:35:13.421: INFO: Controller my-hostname-basic-e3ce19f0-3b78-11ea-8bde-0242ac110005: Got expected result from replica 1 [my-hostname-basic-e3ce19f0-3b78-11ea-8bde-0242ac110005-5b6ml]: "my-hostname-basic-e3ce19f0-3b78-11ea-8bde-0242ac110005-5b6ml", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:35:13.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-7gm6p" for this suite.
Jan 20 11:35:19.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:35:19.623: INFO: namespace: e2e-tests-replicaset-7gm6p, resource: bindings, ignored listing per whitelist
Jan 20 11:35:19.666: INFO: namespace e2e-tests-replicaset-7gm6p deletion completed in 6.235227449s

• [SLOW TEST:20.575 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:35:19.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-f015d433-3b78-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 20 11:35:19.869: INFO: Waiting up to 5m0s for pod "pod-configmaps-f0169616-3b78-11ea-8bde-0242ac110005" in namespace "e2e-tests-configmap-cvllp" to be "success or failure"
Jan 20 11:35:19.880: INFO: Pod "pod-configmaps-f0169616-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.501747ms
Jan 20 11:35:21.906: INFO: Pod "pod-configmaps-f0169616-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035886522s
Jan 20 11:35:23.917: INFO: Pod "pod-configmaps-f0169616-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046947903s
Jan 20 11:35:25.944: INFO: Pod "pod-configmaps-f0169616-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07443955s
Jan 20 11:35:28.380: INFO: Pod "pod-configmaps-f0169616-3b78-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.510384002s
Jan 20 11:35:30.401: INFO: Pod "pod-configmaps-f0169616-3b78-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.530904846s
STEP: Saw pod success
Jan 20 11:35:30.401: INFO: Pod "pod-configmaps-f0169616-3b78-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:35:30.410: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-f0169616-3b78-11ea-8bde-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 20 11:35:30.578: INFO: Waiting for pod pod-configmaps-f0169616-3b78-11ea-8bde-0242ac110005 to disappear
Jan 20 11:35:30.734: INFO: Pod pod-configmaps-f0169616-3b78-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:35:30.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-cvllp" for this suite.
Jan 20 11:35:38.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:35:38.899: INFO: namespace: e2e-tests-configmap-cvllp, resource: bindings, ignored listing per whitelist
Jan 20 11:35:38.920: INFO: namespace e2e-tests-configmap-cvllp deletion completed in 8.179737721s

• [SLOW TEST:19.254 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:35:38.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jan 20 11:35:39.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-mtl62 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 20 11:35:50.002: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0120 11:35:48.077283    1495 log.go:172] (0xc0001386e0) (0xc0006512c0) Create stream\nI0120 11:35:48.077632    1495 log.go:172] (0xc0001386e0) (0xc0006512c0) Stream added, broadcasting: 1\nI0120 11:35:48.082896    1495 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0120 11:35:48.082954    1495 log.go:172] (0xc0001386e0) (0xc00039ad20) Create stream\nI0120 11:35:48.082969    1495 log.go:172] (0xc0001386e0) (0xc00039ad20) Stream added, broadcasting: 3\nI0120 11:35:48.084284    1495 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0120 11:35:48.084317    1495 log.go:172] (0xc0001386e0) (0xc00074e280) Create stream\nI0120 11:35:48.084327    1495 log.go:172] (0xc0001386e0) (0xc00074e280) Stream added, broadcasting: 5\nI0120 11:35:48.085573    1495 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0120 11:35:48.085622    1495 log.go:172] (0xc0001386e0) (0xc000651360) Create stream\nI0120 11:35:48.085637    1495 log.go:172] (0xc0001386e0) (0xc000651360) Stream added, broadcasting: 7\nI0120 11:35:48.087047    1495 log.go:172] (0xc0001386e0) Reply frame received for 7\nI0120 11:35:48.087447    1495 log.go:172] (0xc00039ad20) (3) Writing data frame\nI0120 11:35:48.087626    1495 log.go:172] (0xc00039ad20) (3) Writing data frame\nI0120 11:35:48.092266    1495 log.go:172] (0xc0001386e0) Data frame received for 5\nI0120 11:35:48.092326    1495 log.go:172] (0xc00074e280) (5) Data frame handling\nI0120 11:35:48.092401    1495 log.go:172] (0xc00074e280) (5) Data frame sent\nI0120 11:35:48.100111    1495 log.go:172] (0xc0001386e0) Data frame received for 5\nI0120 11:35:48.100136    1495 log.go:172] (0xc00074e280) (5) Data frame handling\nI0120 11:35:48.100143    1495 log.go:172] (0xc00074e280) (5) Data frame 
sent\nI0120 11:35:49.886729    1495 log.go:172] (0xc0001386e0) (0xc00039ad20) Stream removed, broadcasting: 3\nI0120 11:35:49.886903    1495 log.go:172] (0xc0001386e0) Data frame received for 1\nI0120 11:35:49.886977    1495 log.go:172] (0xc0001386e0) (0xc00074e280) Stream removed, broadcasting: 5\nI0120 11:35:49.887059    1495 log.go:172] (0xc0006512c0) (1) Data frame handling\nI0120 11:35:49.887111    1495 log.go:172] (0xc0006512c0) (1) Data frame sent\nI0120 11:35:49.887146    1495 log.go:172] (0xc0001386e0) (0xc000651360) Stream removed, broadcasting: 7\nI0120 11:35:49.887195    1495 log.go:172] (0xc0001386e0) (0xc0006512c0) Stream removed, broadcasting: 1\nI0120 11:35:49.887212    1495 log.go:172] (0xc0001386e0) Go away received\nI0120 11:35:49.887580    1495 log.go:172] (0xc0001386e0) (0xc0006512c0) Stream removed, broadcasting: 1\nI0120 11:35:49.887615    1495 log.go:172] (0xc0001386e0) (0xc00039ad20) Stream removed, broadcasting: 3\nI0120 11:35:49.887626    1495 log.go:172] (0xc0001386e0) (0xc00074e280) Stream removed, broadcasting: 5\nI0120 11:35:49.887640    1495 log.go:172] (0xc0001386e0) (0xc000651360) Stream removed, broadcasting: 7\n"
Jan 20 11:35:50.003: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:35:52.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mtl62" for this suite.
Jan 20 11:36:04.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:36:04.159: INFO: namespace: e2e-tests-kubectl-mtl62, resource: bindings, ignored listing per whitelist
Jan 20 11:36:04.228: INFO: namespace e2e-tests-kubectl-mtl62 deletion completed in 12.196495633s

• [SLOW TEST:25.308 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:36:04.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-tw5td
Jan 20 11:36:14.438: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-tw5td
STEP: checking the pod's current state and verifying that restartCount is present
Jan 20 11:36:14.442: INFO: Initial restart count of pod liveness-exec is 0
Jan 20 11:37:11.163: INFO: Restart count of pod e2e-tests-container-probe-tw5td/liveness-exec is now 1 (56.720955707s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:37:11.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-tw5td" for this suite.
Jan 20 11:37:19.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:37:19.624: INFO: namespace: e2e-tests-container-probe-tw5td, resource: bindings, ignored listing per whitelist
Jan 20 11:37:19.675: INFO: namespace e2e-tests-container-probe-tw5td deletion completed in 8.374955517s

• [SLOW TEST:75.447 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:37:19.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0120 11:38:00.312461       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 20 11:38:00.312: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:38:00.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-42qmk" for this suite.
Jan 20 11:38:24.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:38:24.526: INFO: namespace: e2e-tests-gc-42qmk, resource: bindings, ignored listing per whitelist
Jan 20 11:38:24.616: INFO: namespace e2e-tests-gc-42qmk deletion completed in 24.296786s

• [SLOW TEST:64.940 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:38:24.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 20 11:38:24.942: INFO: (0) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.047543ms)
Jan 20 11:38:24.958: INFO: (1) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.648152ms)
Jan 20 11:38:24.970: INFO: (2) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.329331ms)
Jan 20 11:38:25.108: INFO: (3) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 138.39188ms)
Jan 20 11:38:25.128: INFO: (4) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 19.862425ms)
Jan 20 11:38:25.136: INFO: (5) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.595247ms)
Jan 20 11:38:25.143: INFO: (6) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.542449ms)
Jan 20 11:38:25.154: INFO: (7) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.319476ms)
Jan 20 11:38:25.163: INFO: (8) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.563001ms)
Jan 20 11:38:25.172: INFO: (9) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.261875ms)
Jan 20 11:38:25.181: INFO: (10) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.008759ms)
Jan 20 11:38:25.192: INFO: (11) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.566381ms)
Jan 20 11:38:25.273: INFO: (12) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 80.842553ms)
Jan 20 11:38:25.292: INFO: (13) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 18.709699ms)
Jan 20 11:38:25.309: INFO: (14) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.218654ms)
Jan 20 11:38:25.324: INFO: (15) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.330522ms)
Jan 20 11:38:25.331: INFO: (16) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.769276ms)
Jan 20 11:38:25.341: INFO: (17) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.979836ms)
Jan 20 11:38:25.348: INFO: (18) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.46003ms)
Jan 20 11:38:25.353: INFO: (19) /api/v1/nodes/hunter-server-hu5at5svl7ps:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.099279ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:38:25.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-proxy-5jxcv" for this suite.
Jan 20 11:38:31.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:38:31.502: INFO: namespace: e2e-tests-proxy-5jxcv, resource: bindings, ignored listing per whitelist
Jan 20 11:38:31.539: INFO: namespace e2e-tests-proxy-5jxcv deletion completed in 6.181464291s

• [SLOW TEST:6.922 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:38:31.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-626b180b-3b79-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 20 11:38:31.679: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-626c6105-3b79-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-t7nk9" to be "success or failure"
Jan 20 11:38:31.787: INFO: Pod "pod-projected-secrets-626c6105-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 108.113868ms
Jan 20 11:38:33.806: INFO: Pod "pod-projected-secrets-626c6105-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126217341s
Jan 20 11:38:35.847: INFO: Pod "pod-projected-secrets-626c6105-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168119431s
Jan 20 11:38:37.878: INFO: Pod "pod-projected-secrets-626c6105-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19902354s
Jan 20 11:38:39.897: INFO: Pod "pod-projected-secrets-626c6105-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217475695s
Jan 20 11:38:41.927: INFO: Pod "pod-projected-secrets-626c6105-3b79-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.248021395s
STEP: Saw pod success
Jan 20 11:38:41.928: INFO: Pod "pod-projected-secrets-626c6105-3b79-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:38:41.943: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-626c6105-3b79-11ea-8bde-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 20 11:38:42.135: INFO: Waiting for pod pod-projected-secrets-626c6105-3b79-11ea-8bde-0242ac110005 to disappear
Jan 20 11:38:42.205: INFO: Pod pod-projected-secrets-626c6105-3b79-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:38:42.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-t7nk9" for this suite.
Jan 20 11:38:48.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:38:48.541: INFO: namespace: e2e-tests-projected-t7nk9, resource: bindings, ignored listing per whitelist
Jan 20 11:38:48.711: INFO: namespace e2e-tests-projected-t7nk9 deletion completed in 6.433953082s

• [SLOW TEST:17.172 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
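Editor's note: the projected-secret test above creates a Secret, then a pod that mounts it through a `projected` volume whose `items` list remaps a key to a new file path (the "mappings" in the test name). A minimal manifest sketch of the same pattern — all names, the image, and the key/path mapping are illustrative, not taken from the run:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-map        # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # Read the secret key through its remapped path:
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1
            path: new-path-data-1        # key exposed under a different file name
```

The pod runs to `Succeeded` once `cat` exits 0, which matches the "success or failure" condition the framework polls for above.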
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:38:48.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-6cc2709b-3b79-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 20 11:38:49.075: INFO: Waiting up to 5m0s for pod "pod-secrets-6ccbdd93-3b79-11ea-8bde-0242ac110005" in namespace "e2e-tests-secrets-pt7vb" to be "success or failure"
Jan 20 11:38:49.085: INFO: Pod "pod-secrets-6ccbdd93-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.876046ms
Jan 20 11:38:51.261: INFO: Pod "pod-secrets-6ccbdd93-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185674314s
Jan 20 11:38:53.278: INFO: Pod "pod-secrets-6ccbdd93-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202838835s
Jan 20 11:38:55.543: INFO: Pod "pod-secrets-6ccbdd93-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.467239672s
Jan 20 11:38:57.552: INFO: Pod "pod-secrets-6ccbdd93-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.476037486s
Jan 20 11:38:59.571: INFO: Pod "pod-secrets-6ccbdd93-3b79-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.495730224s
STEP: Saw pod success
Jan 20 11:38:59.571: INFO: Pod "pod-secrets-6ccbdd93-3b79-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:38:59.580: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-6ccbdd93-3b79-11ea-8bde-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 20 11:38:59.957: INFO: Waiting for pod pod-secrets-6ccbdd93-3b79-11ea-8bde-0242ac110005 to disappear
Jan 20 11:38:59.976: INFO: Pod pod-secrets-6ccbdd93-3b79-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:38:59.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-pt7vb" for this suite.
Jan 20 11:39:08.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:39:08.153: INFO: namespace: e2e-tests-secrets-pt7vb, resource: bindings, ignored listing per whitelist
Jan 20 11:39:08.263: INFO: namespace e2e-tests-secrets-pt7vb deletion completed in 8.2817718s

• [SLOW TEST:19.552 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
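Editor's note: the `defaultMode` test above verifies that files in a Secret volume are created with the requested permission bits instead of the 0644 default. A minimal sketch of the pattern (names, image, and the chosen mode are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Print the file's permission bits; expect "400" with the mode below.
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400   # octal literal in YAML; JSON clients must send decimal 256
```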
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:39:08.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-785ea0c4-3b79-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 20 11:39:08.640: INFO: Waiting up to 5m0s for pod "pod-secrets-7875e8d7-3b79-11ea-8bde-0242ac110005" in namespace "e2e-tests-secrets-c8fzg" to be "success or failure"
Jan 20 11:39:08.672: INFO: Pod "pod-secrets-7875e8d7-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.418751ms
Jan 20 11:39:10.716: INFO: Pod "pod-secrets-7875e8d7-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075869239s
Jan 20 11:39:12.730: INFO: Pod "pod-secrets-7875e8d7-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089923665s
Jan 20 11:39:14.744: INFO: Pod "pod-secrets-7875e8d7-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103692503s
Jan 20 11:39:16.816: INFO: Pod "pod-secrets-7875e8d7-3b79-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.176594456s
STEP: Saw pod success
Jan 20 11:39:16.817: INFO: Pod "pod-secrets-7875e8d7-3b79-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:39:16.872: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-7875e8d7-3b79-11ea-8bde-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 20 11:39:16.986: INFO: Waiting for pod pod-secrets-7875e8d7-3b79-11ea-8bde-0242ac110005 to disappear
Jan 20 11:39:17.067: INFO: Pod pod-secrets-7875e8d7-3b79-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:39:17.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-c8fzg" for this suite.
Jan 20 11:39:23.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:39:23.312: INFO: namespace: e2e-tests-secrets-c8fzg, resource: bindings, ignored listing per whitelist
Jan 20 11:39:23.378: INFO: namespace e2e-tests-secrets-c8fzg deletion completed in 6.179397362s
STEP: Destroying namespace "e2e-tests-secret-namespace-88c7t" for this suite.
Jan 20 11:39:29.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:39:29.482: INFO: namespace: e2e-tests-secret-namespace-88c7t, resource: bindings, ignored listing per whitelist
Jan 20 11:39:29.571: INFO: namespace e2e-tests-secret-namespace-88c7t deletion completed in 6.192907797s

• [SLOW TEST:21.307 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
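Editor's note: this test explains the second namespace ("e2e-tests-secret-namespace-88c7t") destroyed in the teardown above — it holds a decoy Secret with the same name, and the test asserts the pod mounts the Secret from its own namespace, since a `secret` volume's `secretName` is always resolved namespace-locally. A sketch under illustrative names:

```yaml
# The secret the pod should actually see, in the pod's namespace.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
  namespace: ns-a
stringData:
  data-1: from-ns-a
---
# Same-named secret in a different namespace; must be ignored by the pod.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
  namespace: ns-b
stringData:
  data-1: from-ns-b
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
  namespace: ns-a
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]   # should print "from-ns-a"
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # no namespace field exists here: always the pod's own
```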
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:39:29.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 20 11:39:29.752: INFO: Waiting up to 5m0s for pod "pod-8509e4aa-3b79-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-785qk" to be "success or failure"
Jan 20 11:39:29.990: INFO: Pod "pod-8509e4aa-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 237.711921ms
Jan 20 11:39:32.009: INFO: Pod "pod-8509e4aa-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256999481s
Jan 20 11:39:34.040: INFO: Pod "pod-8509e4aa-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287940363s
Jan 20 11:39:36.214: INFO: Pod "pod-8509e4aa-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.461711037s
Jan 20 11:39:38.225: INFO: Pod "pod-8509e4aa-3b79-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.472941146s
Jan 20 11:39:40.242: INFO: Pod "pod-8509e4aa-3b79-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.489520807s
STEP: Saw pod success
Jan 20 11:39:40.242: INFO: Pod "pod-8509e4aa-3b79-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:39:40.247: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-8509e4aa-3b79-11ea-8bde-0242ac110005 container test-container: 
STEP: delete the pod
Jan 20 11:39:40.370: INFO: Waiting for pod pod-8509e4aa-3b79-11ea-8bde-0242ac110005 to disappear
Jan 20 11:39:40.428: INFO: Pod pod-8509e4aa-3b79-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:39:40.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-785qk" for this suite.
Jan 20 11:39:46.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:39:46.751: INFO: namespace: e2e-tests-emptydir-785qk, resource: bindings, ignored listing per whitelist
Jan 20 11:39:46.785: INFO: namespace e2e-tests-emptydir-785qk deletion completed in 6.328036868s

• [SLOW TEST:17.213 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
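Editor's note: the `(root,0666,default)` tuple in the test name means: write as root, with file mode 0666, on the default emptyDir medium (node filesystem rather than `medium: Memory` tmpfs). The real test uses the e2e mount-test image; a rough equivalent with busybox (illustrative names and commands):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Clear the umask so the new file actually gets 0666, then verify it.
    command: ["sh", "-c", "umask 0 && echo test > /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}   # empty medium = "default"; medium: Memory would back it with tmpfs
```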
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:39:46.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-n6p5z
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 20 11:39:47.060: INFO: Found 0 stateful pods, waiting for 3
Jan 20 11:39:57.129: INFO: Found 2 stateful pods, waiting for 3
Jan 20 11:40:07.424: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 11:40:07.424: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 11:40:07.425: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 20 11:40:17.078: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 11:40:17.078: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 11:40:17.078: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 11:40:17.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-n6p5z ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 11:40:17.979: INFO: stderr: "I0120 11:40:17.339243    1521 log.go:172] (0xc00073a370) (0xc0007c2640) Create stream\nI0120 11:40:17.339623    1521 log.go:172] (0xc00073a370) (0xc0007c2640) Stream added, broadcasting: 1\nI0120 11:40:17.347128    1521 log.go:172] (0xc00073a370) Reply frame received for 1\nI0120 11:40:17.347180    1521 log.go:172] (0xc00073a370) (0xc0007c26e0) Create stream\nI0120 11:40:17.347188    1521 log.go:172] (0xc00073a370) (0xc0007c26e0) Stream added, broadcasting: 3\nI0120 11:40:17.348369    1521 log.go:172] (0xc00073a370) Reply frame received for 3\nI0120 11:40:17.348407    1521 log.go:172] (0xc00073a370) (0xc000652be0) Create stream\nI0120 11:40:17.348416    1521 log.go:172] (0xc00073a370) (0xc000652be0) Stream added, broadcasting: 5\nI0120 11:40:17.349375    1521 log.go:172] (0xc00073a370) Reply frame received for 5\nI0120 11:40:17.675390    1521 log.go:172] (0xc00073a370) Data frame received for 3\nI0120 11:40:17.675541    1521 log.go:172] (0xc0007c26e0) (3) Data frame handling\nI0120 11:40:17.675579    1521 log.go:172] (0xc0007c26e0) (3) Data frame sent\nI0120 11:40:17.954294    1521 log.go:172] (0xc00073a370) Data frame received for 1\nI0120 11:40:17.954480    1521 log.go:172] (0xc0007c2640) (1) Data frame handling\nI0120 11:40:17.954562    1521 log.go:172] (0xc0007c2640) (1) Data frame sent\nI0120 11:40:17.954619    1521 log.go:172] (0xc00073a370) (0xc0007c2640) Stream removed, broadcasting: 1\nI0120 11:40:17.956173    1521 log.go:172] (0xc00073a370) (0xc0007c26e0) Stream removed, broadcasting: 3\nI0120 11:40:17.956551    1521 log.go:172] (0xc00073a370) (0xc000652be0) Stream removed, broadcasting: 5\nI0120 11:40:17.956696    1521 log.go:172] (0xc00073a370) Go away received\nI0120 11:40:17.956822    1521 log.go:172] (0xc00073a370) (0xc0007c2640) Stream removed, broadcasting: 1\nI0120 11:40:17.957051    1521 log.go:172] (0xc00073a370) (0xc0007c26e0) Stream removed, broadcasting: 3\nI0120 11:40:17.957114    1521 log.go:172] (0xc00073a370) (0xc000652be0) Stream removed, broadcasting: 5\n"
Jan 20 11:40:17.980: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 11:40:17.980: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 20 11:40:18.371: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 20 11:40:28.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-n6p5z ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 11:40:29.203: INFO: stderr: "I0120 11:40:28.850502    1544 log.go:172] (0xc000138580) (0xc000395360) Create stream\nI0120 11:40:28.850856    1544 log.go:172] (0xc000138580) (0xc000395360) Stream added, broadcasting: 1\nI0120 11:40:28.859797    1544 log.go:172] (0xc000138580) Reply frame received for 1\nI0120 11:40:28.860065    1544 log.go:172] (0xc000138580) (0xc00073a000) Create stream\nI0120 11:40:28.860132    1544 log.go:172] (0xc000138580) (0xc00073a000) Stream added, broadcasting: 3\nI0120 11:40:28.861968    1544 log.go:172] (0xc000138580) Reply frame received for 3\nI0120 11:40:28.862043    1544 log.go:172] (0xc000138580) (0xc000395400) Create stream\nI0120 11:40:28.862063    1544 log.go:172] (0xc000138580) (0xc000395400) Stream added, broadcasting: 5\nI0120 11:40:28.869421    1544 log.go:172] (0xc000138580) Reply frame received for 5\nI0120 11:40:29.020514    1544 log.go:172] (0xc000138580) Data frame received for 3\nI0120 11:40:29.020628    1544 log.go:172] (0xc00073a000) (3) Data frame handling\nI0120 11:40:29.020655    1544 log.go:172] (0xc00073a000) (3) Data frame sent\nI0120 11:40:29.186320    1544 log.go:172] (0xc000138580) (0xc00073a000) Stream removed, broadcasting: 3\nI0120 11:40:29.187083    1544 log.go:172] (0xc000138580) Data frame received for 1\nI0120 11:40:29.187121    1544 log.go:172] (0xc000138580) (0xc000395400) Stream removed, broadcasting: 5\nI0120 11:40:29.187245    1544 log.go:172] (0xc000395360) (1) Data frame handling\nI0120 11:40:29.187289    1544 log.go:172] (0xc000395360) (1) Data frame sent\nI0120 11:40:29.187315    1544 log.go:172] (0xc000138580) (0xc000395360) Stream removed, broadcasting: 1\nI0120 11:40:29.187350    1544 log.go:172] (0xc000138580) Go away received\nI0120 11:40:29.188949    1544 log.go:172] (0xc000138580) (0xc000395360) Stream removed, broadcasting: 1\nI0120 11:40:29.189112    1544 log.go:172] (0xc000138580) (0xc00073a000) Stream removed, broadcasting: 3\nI0120 11:40:29.189148    1544 log.go:172] (0xc000138580) (0xc000395400) Stream removed, broadcasting: 5\n"
Jan 20 11:40:29.204: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 11:40:29.204: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 20 11:40:39.263: INFO: Waiting for StatefulSet e2e-tests-statefulset-n6p5z/ss2 to complete update
Jan 20 11:40:39.263: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 11:40:39.263: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 11:40:39.263: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 11:40:49.340: INFO: Waiting for StatefulSet e2e-tests-statefulset-n6p5z/ss2 to complete update
Jan 20 11:40:49.340: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 11:40:49.340: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 11:40:59.423: INFO: Waiting for StatefulSet e2e-tests-statefulset-n6p5z/ss2 to complete update
Jan 20 11:40:59.423: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 11:41:09.288: INFO: Waiting for StatefulSet e2e-tests-statefulset-n6p5z/ss2 to complete update
Jan 20 11:41:09.288: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 11:41:19.287: INFO: Waiting for StatefulSet e2e-tests-statefulset-n6p5z/ss2 to complete update
Jan 20 11:41:19.287: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 11:41:29.296: INFO: Waiting for StatefulSet e2e-tests-statefulset-n6p5z/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 20 11:41:39.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-n6p5z ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 11:41:39.954: INFO: stderr: "I0120 11:41:39.550209    1567 log.go:172] (0xc000752370) (0xc0007c4640) Create stream\nI0120 11:41:39.550500    1567 log.go:172] (0xc000752370) (0xc0007c4640) Stream added, broadcasting: 1\nI0120 11:41:39.560496    1567 log.go:172] (0xc000752370) Reply frame received for 1\nI0120 11:41:39.560541    1567 log.go:172] (0xc000752370) (0xc0007c46e0) Create stream\nI0120 11:41:39.560552    1567 log.go:172] (0xc000752370) (0xc0007c46e0) Stream added, broadcasting: 3\nI0120 11:41:39.564465    1567 log.go:172] (0xc000752370) Reply frame received for 3\nI0120 11:41:39.564509    1567 log.go:172] (0xc000752370) (0xc0006bec80) Create stream\nI0120 11:41:39.564524    1567 log.go:172] (0xc000752370) (0xc0006bec80) Stream added, broadcasting: 5\nI0120 11:41:39.569632    1567 log.go:172] (0xc000752370) Reply frame received for 5\nI0120 11:41:39.809203    1567 log.go:172] (0xc000752370) Data frame received for 3\nI0120 11:41:39.809282    1567 log.go:172] (0xc0007c46e0) (3) Data frame handling\nI0120 11:41:39.809311    1567 log.go:172] (0xc0007c46e0) (3) Data frame sent\nI0120 11:41:39.938949    1567 log.go:172] (0xc000752370) Data frame received for 1\nI0120 11:41:39.939178    1567 log.go:172] (0xc000752370) (0xc0007c46e0) Stream removed, broadcasting: 3\nI0120 11:41:39.939343    1567 log.go:172] (0xc0007c4640) (1) Data frame handling\nI0120 11:41:39.939384    1567 log.go:172] (0xc0007c4640) (1) Data frame sent\nI0120 11:41:39.939397    1567 log.go:172] (0xc000752370) (0xc0007c4640) Stream removed, broadcasting: 1\nI0120 11:41:39.940405    1567 log.go:172] (0xc000752370) (0xc0006bec80) Stream removed, broadcasting: 5\nI0120 11:41:39.940473    1567 log.go:172] (0xc000752370) (0xc0007c4640) Stream removed, broadcasting: 1\nI0120 11:41:39.940485    1567 log.go:172] (0xc000752370) (0xc0007c46e0) Stream removed, broadcasting: 3\nI0120 11:41:39.940497    1567 log.go:172] (0xc000752370) (0xc0006bec80) Stream removed, broadcasting: 5\nI0120 11:41:39.940842    1567 log.go:172] (0xc000752370) Go away received\n"
Jan 20 11:41:39.955: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 11:41:39.955: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 20 11:41:50.048: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 20 11:42:00.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-n6p5z ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 11:42:01.078: INFO: stderr: "I0120 11:42:00.769423    1590 log.go:172] (0xc00089c2c0) (0xc000702640) Create stream\nI0120 11:42:00.770088    1590 log.go:172] (0xc00089c2c0) (0xc000702640) Stream added, broadcasting: 1\nI0120 11:42:00.777555    1590 log.go:172] (0xc00089c2c0) Reply frame received for 1\nI0120 11:42:00.777620    1590 log.go:172] (0xc00089c2c0) (0xc000790e60) Create stream\nI0120 11:42:00.777629    1590 log.go:172] (0xc00089c2c0) (0xc000790e60) Stream added, broadcasting: 3\nI0120 11:42:00.781383    1590 log.go:172] (0xc00089c2c0) Reply frame received for 3\nI0120 11:42:00.781415    1590 log.go:172] (0xc00089c2c0) (0xc00033a000) Create stream\nI0120 11:42:00.781424    1590 log.go:172] (0xc00089c2c0) (0xc00033a000) Stream added, broadcasting: 5\nI0120 11:42:00.782188    1590 log.go:172] (0xc00089c2c0) Reply frame received for 5\nI0120 11:42:00.916134    1590 log.go:172] (0xc00089c2c0) Data frame received for 3\nI0120 11:42:00.916294    1590 log.go:172] (0xc000790e60) (3) Data frame handling\nI0120 11:42:00.916366    1590 log.go:172] (0xc000790e60) (3) Data frame sent\nI0120 11:42:01.062010    1590 log.go:172] (0xc00089c2c0) Data frame received for 1\nI0120 11:42:01.062246    1590 log.go:172] (0xc00089c2c0) (0xc000790e60) Stream removed, broadcasting: 3\nI0120 11:42:01.062486    1590 log.go:172] (0xc000702640) (1) Data frame handling\nI0120 11:42:01.062505    1590 log.go:172] (0xc000702640) (1) Data frame sent\nI0120 11:42:01.062525    1590 log.go:172] (0xc00089c2c0) (0xc000702640) Stream removed, broadcasting: 1\nI0120 11:42:01.062676    1590 log.go:172] (0xc00089c2c0) (0xc00033a000) Stream removed, broadcasting: 5\nI0120 11:42:01.062791    1590 log.go:172] (0xc00089c2c0) Go away received\nI0120 11:42:01.063972    1590 log.go:172] (0xc00089c2c0) (0xc000702640) Stream removed, broadcasting: 1\nI0120 11:42:01.063992    1590 log.go:172] (0xc00089c2c0) (0xc000790e60) Stream removed, broadcasting: 3\nI0120 11:42:01.064002    1590 log.go:172] (0xc00089c2c0) (0xc00033a000) Stream removed, broadcasting: 5\n"
Jan 20 11:42:01.079: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 11:42:01.079: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 20 11:42:11.165: INFO: Waiting for StatefulSet e2e-tests-statefulset-n6p5z/ss2 to complete update
Jan 20 11:42:11.165: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 20 11:42:11.165: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 20 11:42:11.165: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 20 11:42:21.663: INFO: Waiting for StatefulSet e2e-tests-statefulset-n6p5z/ss2 to complete update
Jan 20 11:42:21.663: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 20 11:42:21.663: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 20 11:42:31.202: INFO: Waiting for StatefulSet e2e-tests-statefulset-n6p5z/ss2 to complete update
Jan 20 11:42:31.202: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 20 11:42:31.202: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 20 11:42:41.190: INFO: Waiting for StatefulSet e2e-tests-statefulset-n6p5z/ss2 to complete update
Jan 20 11:42:41.190: INFO: Waiting for Pod e2e-tests-statefulset-n6p5z/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan 20 11:42:51.190: INFO: Waiting for StatefulSet e2e-tests-statefulset-n6p5z/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 20 11:43:01.188: INFO: Deleting all statefulset in ns e2e-tests-statefulset-n6p5z
Jan 20 11:43:01.193: INFO: Scaling statefulset ss2 to 0
Jan 20 11:43:31.246: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 11:43:31.253: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:43:31.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-n6p5z" for this suite.
Jan 20 11:43:39.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:43:39.796: INFO: namespace: e2e-tests-statefulset-n6p5z, resource: bindings, ignored listing per whitelist
Jan 20 11:43:39.861: INFO: namespace e2e-tests-statefulset-n6p5z deletion completed in 8.365548593s

• [SLOW TEST:233.076 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
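Editor's note: the StatefulSet test above creates a 3-replica set with the `RollingUpdate` strategy, bumps the container image from nginx:1.14-alpine to 1.15-alpine, waits for pods to be replaced in reverse ordinal order (ss2-2 first, ss2-0 last — visible in the "Waiting for Pod ... revision" lines), then rolls the template back the same way. A minimal sketch of the object involved; the headless Service named by `serviceName` is assumed to exist already, as the "Creating service test" step does above:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test            # headless service created beforehand
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate        # pods replaced one at a time, highest ordinal first
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # updated to 1.15-alpine, then reverted
```

Each image change writes a new controller revision (the `ss2-6c5cd755cd` / `ss2-7c9b54fd4c` names in the log), and the rollback is just another template update back to the previous revision's spec.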
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:43:39.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:43:47.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-zcwk9" for this suite.
Jan 20 11:43:53.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:43:53.568: INFO: namespace: e2e-tests-namespaces-zcwk9, resource: bindings, ignored listing per whitelist
Jan 20 11:43:53.606: INFO: namespace e2e-tests-namespaces-zcwk9 deletion completed in 6.237982351s
STEP: Destroying namespace "e2e-tests-nsdeletetest-rl5gr" for this suite.
Jan 20 11:43:53.621: INFO: Namespace e2e-tests-nsdeletetest-rl5gr was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-xwzsq" for this suite.
Jan 20 11:43:59.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:43:59.751: INFO: namespace: e2e-tests-nsdeletetest-xwzsq, resource: bindings, ignored listing per whitelist
Jan 20 11:43:59.855: INFO: namespace e2e-tests-nsdeletetest-xwzsq deletion completed in 6.233120676s

• [SLOW TEST:19.994 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
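Editor's note: the test above verifies cascading deletion, a Service created inside a namespace must be gone after the namespace is deleted and recreated. A minimal sketch of the objects involved (names are illustrative, not the generated ones from the log):

```yaml
# Illustrative namespace + service pair; deleting the namespace must also
# remove the service, which is what this conformance test checks.
apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest-example     # hypothetical name
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdeletetest-example
spec:
  selector:
    app: test
  ports:
  - port: 80
    targetPort: 80
```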
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:43:59.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jan 20 11:44:00.163: INFO: Waiting up to 5m0s for pod "var-expansion-2635ee4c-3b7a-11ea-8bde-0242ac110005" in namespace "e2e-tests-var-expansion-khvcb" to be "success or failure"
Jan 20 11:44:00.224: INFO: Pod "var-expansion-2635ee4c-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 61.375765ms
Jan 20 11:44:02.251: INFO: Pod "var-expansion-2635ee4c-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088293138s
Jan 20 11:44:04.265: INFO: Pod "var-expansion-2635ee4c-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101879141s
Jan 20 11:44:06.441: INFO: Pod "var-expansion-2635ee4c-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.277863718s
Jan 20 11:44:08.557: INFO: Pod "var-expansion-2635ee4c-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.39425151s
Jan 20 11:44:10.589: INFO: Pod "var-expansion-2635ee4c-3b7a-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.425758089s
STEP: Saw pod success
Jan 20 11:44:10.589: INFO: Pod "var-expansion-2635ee4c-3b7a-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:44:10.606: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-2635ee4c-3b7a-11ea-8bde-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 20 11:44:10.703: INFO: Waiting for pod var-expansion-2635ee4c-3b7a-11ea-8bde-0242ac110005 to disappear
Jan 20 11:44:10.715: INFO: Pod var-expansion-2635ee4c-3b7a-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:44:10.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-khvcb" for this suite.
Jan 20 11:44:16.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:44:16.954: INFO: namespace: e2e-tests-var-expansion-khvcb, resource: bindings, ignored listing per whitelist
Jan 20 11:44:17.077: INFO: namespace e2e-tests-var-expansion-khvcb deletion completed in 6.354752365s

• [SLOW TEST:17.222 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:44:17.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 20 11:44:25.315: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-306db4ec-3b7a-11ea-8bde-0242ac110005,GenerateName:,Namespace:e2e-tests-events-psj59,SelfLink:/api/v1/namespaces/e2e-tests-events-psj59/pods/send-events-306db4ec-3b7a-11ea-8bde-0242ac110005,UID:306e80f4-3b7a-11ea-a994-fa163e34d433,ResourceVersion:18846431,Generation:0,CreationTimestamp:2020-01-20 11:44:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 276667642,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-99tkm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-99tkm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-99tkm true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-server-hu5at5svl7ps,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00250e160} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc00250e180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:44:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:44:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:44:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 11:44:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.1.240,PodIP:10.32.0.4,StartTime:2020-01-20 11:44:17 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-20 11:44:24 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://65fd2ecd49127bfce6123799f5eef8e9faebdfb84df6d40721d384087b8367f7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan 20 11:44:27.325: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 20 11:44:29.344: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:44:29.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-psj59" for this suite.
Jan 20 11:45:13.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:45:13.619: INFO: namespace: e2e-tests-events-psj59, resource: bindings, ignored listing per whitelist
Jan 20 11:45:13.673: INFO: namespace e2e-tests-events-psj59 deletion completed in 44.258013164s

• [SLOW TEST:56.597 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:45:13.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-5244e1ba-3b7a-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 20 11:45:14.175: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-525173c8-3b7a-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-h7442" to be "success or failure"
Jan 20 11:45:14.208: INFO: Pod "pod-projected-configmaps-525173c8-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 32.831748ms
Jan 20 11:45:16.235: INFO: Pod "pod-projected-configmaps-525173c8-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059161527s
Jan 20 11:45:18.258: INFO: Pod "pod-projected-configmaps-525173c8-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082441257s
Jan 20 11:45:20.279: INFO: Pod "pod-projected-configmaps-525173c8-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103476121s
Jan 20 11:45:22.324: INFO: Pod "pod-projected-configmaps-525173c8-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148020549s
Jan 20 11:45:24.332: INFO: Pod "pod-projected-configmaps-525173c8-3b7a-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.15680043s
STEP: Saw pod success
Jan 20 11:45:24.332: INFO: Pod "pod-projected-configmaps-525173c8-3b7a-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:45:24.335: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-525173c8-3b7a-11ea-8bde-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 11:45:24.372: INFO: Waiting for pod pod-projected-configmaps-525173c8-3b7a-11ea-8bde-0242ac110005 to disappear
Jan 20 11:45:24.376: INFO: Pod pod-projected-configmaps-525173c8-3b7a-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:45:24.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-h7442" for this suite.
Jan 20 11:45:30.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:45:30.596: INFO: namespace: e2e-tests-projected-h7442, resource: bindings, ignored listing per whitelist
Jan 20 11:45:30.883: INFO: namespace e2e-tests-projected-h7442 deletion completed in 6.502173053s

• [SLOW TEST:17.209 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:45:30.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 20 11:45:31.387: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 20 11:45:31.433: INFO: Waiting for terminating namespaces to be deleted...
Jan 20 11:45:31.497: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 20 11:45:31.517: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 20 11:45:31.517: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 20 11:45:31.517: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 20 11:45:31.517: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 20 11:45:31.517: INFO: 	Container coredns ready: true, restart count 0
Jan 20 11:45:31.517: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 20 11:45:31.517: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 20 11:45:31.517: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 20 11:45:31.517: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 20 11:45:31.517: INFO: 	Container weave ready: true, restart count 0
Jan 20 11:45:31.517: INFO: 	Container weave-npc ready: true, restart count 0
Jan 20 11:45:31.517: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 20 11:45:31.517: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-61866497-3b7a-11ea-8bde-0242ac110005 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-61866497-3b7a-11ea-8bde-0242ac110005 off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label kubernetes.io/e2e-61866497-3b7a-11ea-8bde-0242ac110005
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:45:50.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-6h9jk" for this suite.
Jan 20 11:46:02.210: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:46:02.284: INFO: namespace: e2e-tests-sched-pred-6h9jk, resource: bindings, ignored listing per whitelist
Jan 20 11:46:02.336: INFO: namespace e2e-tests-sched-pred-6h9jk deletion completed in 12.228371867s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:31.453 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
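Editor's note: the predicate test above applies a random label to a node and relaunches a pod whose `nodeSelector` must match it. Sketch (the label key/value pair is the generated one from the log; the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels-example      # illustrative name
spec:
  nodeSelector:
    kubernetes.io/e2e-61866497-3b7a-11ea-8bde-0242ac110005: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1  # illustrative image
```

The scheduler may only place this pod on a node carrying the matching label, which the test verifies before removing the label again.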
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:46:02.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 20 11:46:02.694: INFO: Waiting up to 5m0s for pod "downward-api-6f3f9fdd-3b7a-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-l69gn" to be "success or failure"
Jan 20 11:46:02.724: INFO: Pod "downward-api-6f3f9fdd-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.960353ms
Jan 20 11:46:04.887: INFO: Pod "downward-api-6f3f9fdd-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19272318s
Jan 20 11:46:06.909: INFO: Pod "downward-api-6f3f9fdd-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214087747s
Jan 20 11:46:08.928: INFO: Pod "downward-api-6f3f9fdd-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233161358s
Jan 20 11:46:11.454: INFO: Pod "downward-api-6f3f9fdd-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.759785868s
Jan 20 11:46:13.480: INFO: Pod "downward-api-6f3f9fdd-3b7a-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.785796128s
STEP: Saw pod success
Jan 20 11:46:13.481: INFO: Pod "downward-api-6f3f9fdd-3b7a-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:46:13.491: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-6f3f9fdd-3b7a-11ea-8bde-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 20 11:46:13.884: INFO: Waiting for pod downward-api-6f3f9fdd-3b7a-11ea-8bde-0242ac110005 to disappear
Jan 20 11:46:13.988: INFO: Pod downward-api-6f3f9fdd-3b7a-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:46:13.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-l69gn" for this suite.
Jan 20 11:46:20.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:46:20.347: INFO: namespace: e2e-tests-downward-api-l69gn, resource: bindings, ignored listing per whitelist
Jan 20 11:46:20.353: INFO: namespace e2e-tests-downward-api-l69gn deletion completed in 6.347691321s

• [SLOW TEST:18.016 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:46:20.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 11:46:20.599: INFO: Waiting up to 5m0s for pod "downwardapi-volume-79e9e09c-3b7a-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-xjvcj" to be "success or failure"
Jan 20 11:46:20.609: INFO: Pod "downwardapi-volume-79e9e09c-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.862236ms
Jan 20 11:46:22.662: INFO: Pod "downwardapi-volume-79e9e09c-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06268887s
Jan 20 11:46:24.676: INFO: Pod "downwardapi-volume-79e9e09c-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076829815s
Jan 20 11:46:26.696: INFO: Pod "downwardapi-volume-79e9e09c-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097025434s
Jan 20 11:46:28.711: INFO: Pod "downwardapi-volume-79e9e09c-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11195351s
Jan 20 11:46:30.769: INFO: Pod "downwardapi-volume-79e9e09c-3b7a-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.169458378s
STEP: Saw pod success
Jan 20 11:46:30.769: INFO: Pod "downwardapi-volume-79e9e09c-3b7a-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:46:30.785: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-79e9e09c-3b7a-11ea-8bde-0242ac110005 container client-container: 
STEP: delete the pod
Jan 20 11:46:30.872: INFO: Waiting for pod downwardapi-volume-79e9e09c-3b7a-11ea-8bde-0242ac110005 to disappear
Jan 20 11:46:30.952: INFO: Pod downwardapi-volume-79e9e09c-3b7a-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:46:30.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-xjvcj" for this suite.
Jan 20 11:46:37.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:46:37.115: INFO: namespace: e2e-tests-downward-api-xjvcj, resource: bindings, ignored listing per whitelist
Jan 20 11:46:37.143: INFO: namespace e2e-tests-downward-api-xjvcj deletion completed in 6.179380988s

• [SLOW TEST:16.790 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:46:37.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-xnvp
STEP: Creating a pod to test atomic-volume-subpath
Jan 20 11:46:37.412: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xnvp" in namespace "e2e-tests-subpath-cg6pm" to be "success or failure"
Jan 20 11:46:37.437: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Pending", Reason="", readiness=false. Elapsed: 24.755507ms
Jan 20 11:46:39.705: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.293294992s
Jan 20 11:46:41.720: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308473117s
Jan 20 11:46:43.756: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.344146103s
Jan 20 11:46:45.963: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.5510379s
Jan 20 11:46:47.975: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.56337669s
Jan 20 11:46:50.377: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.964825714s
Jan 20 11:46:52.432: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Pending", Reason="", readiness=false. Elapsed: 15.019931297s
Jan 20 11:46:54.449: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Running", Reason="", readiness=false. Elapsed: 17.036651048s
Jan 20 11:46:56.491: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Running", Reason="", readiness=false. Elapsed: 19.078843115s
Jan 20 11:46:58.528: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Running", Reason="", readiness=false. Elapsed: 21.115848948s
Jan 20 11:47:00.557: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Running", Reason="", readiness=false. Elapsed: 23.145029812s
Jan 20 11:47:02.583: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Running", Reason="", readiness=false. Elapsed: 25.171290006s
Jan 20 11:47:04.609: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Running", Reason="", readiness=false. Elapsed: 27.197441141s
Jan 20 11:47:06.638: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Running", Reason="", readiness=false. Elapsed: 29.22598519s
Jan 20 11:47:08.655: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Running", Reason="", readiness=false. Elapsed: 31.243136594s
Jan 20 11:47:10.691: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Running", Reason="", readiness=false. Elapsed: 33.278763031s
Jan 20 11:47:12.708: INFO: Pod "pod-subpath-test-configmap-xnvp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.296289963s
STEP: Saw pod success
Jan 20 11:47:12.708: INFO: Pod "pod-subpath-test-configmap-xnvp" satisfied condition "success or failure"
Jan 20 11:47:12.716: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-xnvp container test-container-subpath-configmap-xnvp: 
STEP: delete the pod
Jan 20 11:47:12.994: INFO: Waiting for pod pod-subpath-test-configmap-xnvp to disappear
Jan 20 11:47:13.015: INFO: Pod pod-subpath-test-configmap-xnvp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xnvp
Jan 20 11:47:13.016: INFO: Deleting pod "pod-subpath-test-configmap-xnvp" in namespace "e2e-tests-subpath-cg6pm"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:47:13.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-cg6pm" for this suite.
Jan 20 11:47:19.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:47:19.184: INFO: namespace: e2e-tests-subpath-cg6pm, resource: bindings, ignored listing per whitelist
Jan 20 11:47:19.206: INFO: namespace e2e-tests-subpath-cg6pm deletion completed in 6.176511133s

• [SLOW TEST:42.063 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:47:19.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0120 11:47:50.290019       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 20 11:47:50.290: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:47:50.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-nnwhv" for this suite.
Jan 20 11:48:00.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:48:00.667: INFO: namespace: e2e-tests-gc-nnwhv, resource: bindings, ignored listing per whitelist
Jan 20 11:48:00.695: INFO: namespace e2e-tests-gc-nnwhv deletion completed in 10.401340192s

• [SLOW TEST:41.488 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:48:00.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 20 11:48:01.029: INFO: Waiting up to 5m0s for pod "pod-b5bbe8c2-3b7a-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-9fwp8" to be "success or failure"
Jan 20 11:48:01.055: INFO: Pod "pod-b5bbe8c2-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 25.257306ms
Jan 20 11:48:03.065: INFO: Pod "pod-b5bbe8c2-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035773318s
Jan 20 11:48:05.094: INFO: Pod "pod-b5bbe8c2-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064810132s
Jan 20 11:48:07.144: INFO: Pod "pod-b5bbe8c2-3b7a-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114647273s
Jan 20 11:48:09.158: INFO: Pod "pod-b5bbe8c2-3b7a-11ea-8bde-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.128205732s
Jan 20 11:48:11.176: INFO: Pod "pod-b5bbe8c2-3b7a-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.14627561s
STEP: Saw pod success
Jan 20 11:48:11.176: INFO: Pod "pod-b5bbe8c2-3b7a-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:48:11.180: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b5bbe8c2-3b7a-11ea-8bde-0242ac110005 container test-container: 
STEP: delete the pod
Jan 20 11:48:11.252: INFO: Waiting for pod pod-b5bbe8c2-3b7a-11ea-8bde-0242ac110005 to disappear
Jan 20 11:48:11.286: INFO: Pod pod-b5bbe8c2-3b7a-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:48:11.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9fwp8" for this suite.
Jan 20 11:48:17.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:48:17.499: INFO: namespace: e2e-tests-emptydir-9fwp8, resource: bindings, ignored listing per whitelist
Jan 20 11:48:17.619: INFO: namespace e2e-tests-emptydir-9fwp8 deletion completed in 6.279956175s

• [SLOW TEST:16.923 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
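The emptyDir test above creates a pod that runs as a non-root user and checks that a default-medium emptyDir volume created with mode 0777 is writable. A rough equivalent as a manifest (hypothetical names; the real test uses a dedicated mount-test image) might be:

```yaml
# Sketch: non-root container writing to an emptyDir volume on the default
# medium (node disk). The conformance test additionally verifies the 0777 mode.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo            # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # run as a non-root UID
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && echo ok > /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # default medium; omit medium: Memory
```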
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:48:17.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-h7p8r
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 20 11:48:17.928: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 20 11:48:54.488: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-h7p8r PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 11:48:54.489: INFO: >>> kubeConfig: /root/.kube/config
I0120 11:48:54.736006       8 log.go:172] (0xc00093f080) (0xc001821cc0) Create stream
I0120 11:48:54.736421       8 log.go:172] (0xc00093f080) (0xc001821cc0) Stream added, broadcasting: 1
I0120 11:48:54.761492       8 log.go:172] (0xc00093f080) Reply frame received for 1
I0120 11:48:54.761544       8 log.go:172] (0xc00093f080) (0xc000e6cbe0) Create stream
I0120 11:48:54.761557       8 log.go:172] (0xc00093f080) (0xc000e6cbe0) Stream added, broadcasting: 3
I0120 11:48:54.762660       8 log.go:172] (0xc00093f080) Reply frame received for 3
I0120 11:48:54.762700       8 log.go:172] (0xc00093f080) (0xc001821d60) Create stream
I0120 11:48:54.762726       8 log.go:172] (0xc00093f080) (0xc001821d60) Stream added, broadcasting: 5
I0120 11:48:54.764356       8 log.go:172] (0xc00093f080) Reply frame received for 5
I0120 11:48:54.952226       8 log.go:172] (0xc00093f080) Data frame received for 3
I0120 11:48:54.952272       8 log.go:172] (0xc000e6cbe0) (3) Data frame handling
I0120 11:48:54.952302       8 log.go:172] (0xc000e6cbe0) (3) Data frame sent
I0120 11:48:55.066320       8 log.go:172] (0xc00093f080) (0xc001821d60) Stream removed, broadcasting: 5
I0120 11:48:55.066424       8 log.go:172] (0xc00093f080) Data frame received for 1
I0120 11:48:55.066438       8 log.go:172] (0xc001821cc0) (1) Data frame handling
I0120 11:48:55.066463       8 log.go:172] (0xc001821cc0) (1) Data frame sent
I0120 11:48:55.066515       8 log.go:172] (0xc00093f080) (0xc001821cc0) Stream removed, broadcasting: 1
I0120 11:48:55.066590       8 log.go:172] (0xc00093f080) (0xc000e6cbe0) Stream removed, broadcasting: 3
I0120 11:48:55.066625       8 log.go:172] (0xc00093f080) Go away received
I0120 11:48:55.066756       8 log.go:172] (0xc00093f080) (0xc001821cc0) Stream removed, broadcasting: 1
I0120 11:48:55.066768       8 log.go:172] (0xc00093f080) (0xc000e6cbe0) Stream removed, broadcasting: 3
I0120 11:48:55.066781       8 log.go:172] (0xc00093f080) (0xc001821d60) Stream removed, broadcasting: 5
Jan 20 11:48:55.066: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:48:55.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-h7p8r" for this suite.
Jan 20 11:49:21.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:49:21.166: INFO: namespace: e2e-tests-pod-network-test-h7p8r, resource: bindings, ignored listing per whitelist
Jan 20 11:49:21.347: INFO: namespace e2e-tests-pod-network-test-h7p8r deletion completed in 26.268352756s

• [SLOW TEST:63.728 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:49:21.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 20 11:49:21.570: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-56dxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-56dxw/configmaps/e2e-watch-test-watch-closed,UID:e5cb43a3-3b7a-11ea-a994-fa163e34d433,ResourceVersion:18847090,Generation:0,CreationTimestamp:2020-01-20 11:49:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 20 11:49:21.570: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-56dxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-56dxw/configmaps/e2e-watch-test-watch-closed,UID:e5cb43a3-3b7a-11ea-a994-fa163e34d433,ResourceVersion:18847091,Generation:0,CreationTimestamp:2020-01-20 11:49:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 20 11:49:21.632: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-56dxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-56dxw/configmaps/e2e-watch-test-watch-closed,UID:e5cb43a3-3b7a-11ea-a994-fa163e34d433,ResourceVersion:18847092,Generation:0,CreationTimestamp:2020-01-20 11:49:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 20 11:49:21.632: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-56dxw,SelfLink:/api/v1/namespaces/e2e-tests-watch-56dxw/configmaps/e2e-watch-test-watch-closed,UID:e5cb43a3-3b7a-11ea-a994-fa163e34d433,ResourceVersion:18847093,Generation:0,CreationTimestamp:2020-01-20 11:49:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:49:21.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-56dxw" for this suite.
Jan 20 11:49:27.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:49:27.824: INFO: namespace: e2e-tests-watch-56dxw, resource: bindings, ignored listing per whitelist
Jan 20 11:49:27.841: INFO: namespace e2e-tests-watch-56dxw deletion completed in 6.20210372s

• [SLOW TEST:6.493 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:49:27.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-tdzwf
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jan 20 11:49:28.069: INFO: Found 0 stateful pods, waiting for 3
Jan 20 11:49:38.095: INFO: Found 1 stateful pods, waiting for 3
Jan 20 11:49:48.098: INFO: Found 2 stateful pods, waiting for 3
Jan 20 11:49:58.087: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 11:49:58.088: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 11:49:58.088: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 20 11:50:08.085: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 11:50:08.085: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 11:50:08.085: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan 20 11:50:08.171: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 20 11:50:18.334: INFO: Updating stateful set ss2
Jan 20 11:50:18.361: INFO: Waiting for Pod e2e-tests-statefulset-tdzwf/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan 20 11:50:32.231: INFO: Found 2 stateful pods, waiting for 3
Jan 20 11:50:42.254: INFO: Found 2 stateful pods, waiting for 3
Jan 20 11:50:52.251: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 11:50:52.251: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 11:50:52.251: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 20 11:51:02.263: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 11:51:02.263: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 11:51:02.263: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 20 11:51:02.305: INFO: Updating stateful set ss2
Jan 20 11:51:02.459: INFO: Waiting for Pod e2e-tests-statefulset-tdzwf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 11:51:12.525: INFO: Waiting for Pod e2e-tests-statefulset-tdzwf/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 11:51:22.607: INFO: Updating stateful set ss2
Jan 20 11:51:22.740: INFO: Waiting for StatefulSet e2e-tests-statefulset-tdzwf/ss2 to complete update
Jan 20 11:51:22.740: INFO: Waiting for Pod e2e-tests-statefulset-tdzwf/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan 20 11:51:32.775: INFO: Waiting for StatefulSet e2e-tests-statefulset-tdzwf/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 20 11:51:42.785: INFO: Deleting all statefulset in ns e2e-tests-statefulset-tdzwf
Jan 20 11:51:42.794: INFO: Scaling statefulset ss2 to 0
Jan 20 11:52:12.882: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 11:52:12.894: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:52:12.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-tdzwf" for this suite.
Jan 20 11:52:21.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:52:21.174: INFO: namespace: e2e-tests-statefulset-tdzwf, resource: bindings, ignored listing per whitelist
Jan 20 11:52:21.284: INFO: namespace e2e-tests-statefulset-tdzwf deletion completed in 8.269870033s

• [SLOW TEST:173.443 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
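The canary and phased rolling updates logged above are driven by the RollingUpdate `partition` field: only pods with an ordinal greater than or equal to the partition are moved to the new revision, and lowering the partition step by step performs the phased rollout. A sketch of the relevant spec (the `ss2` name, replica count, and images come from the log; the service and label names are hypothetical):

```yaml
# With partition: 2 and 3 replicas, only ss2-2 rolls to the new revision (the
# canary); lowering partition to 1 and then 0 phases the update across ss2-1
# and ss2-0.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test              # hypothetical headless-service name
  selector:
    matchLabels:
      app: ss2-demo              # hypothetical label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2
  template:
    metadata:
      labels:
        app: ss2-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # updated image from the log
```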
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:52:21.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 11:52:21.484: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5104f43c-3b7b-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-f7tc2" to be "success or failure"
Jan 20 11:52:21.490: INFO: Pod "downwardapi-volume-5104f43c-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1707ms
Jan 20 11:52:23.515: INFO: Pod "downwardapi-volume-5104f43c-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030590102s
Jan 20 11:52:25.542: INFO: Pod "downwardapi-volume-5104f43c-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058049107s
Jan 20 11:52:27.725: INFO: Pod "downwardapi-volume-5104f43c-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.24118013s
Jan 20 11:52:29.745: INFO: Pod "downwardapi-volume-5104f43c-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26101698s
Jan 20 11:52:31.762: INFO: Pod "downwardapi-volume-5104f43c-3b7b-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.277524281s
STEP: Saw pod success
Jan 20 11:52:31.762: INFO: Pod "downwardapi-volume-5104f43c-3b7b-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:52:31.768: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5104f43c-3b7b-11ea-8bde-0242ac110005 container client-container: 
STEP: delete the pod
Jan 20 11:52:32.372: INFO: Waiting for pod downwardapi-volume-5104f43c-3b7b-11ea-8bde-0242ac110005 to disappear
Jan 20 11:52:32.450: INFO: Pod downwardapi-volume-5104f43c-3b7b-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:52:32.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-f7tc2" for this suite.
Jan 20 11:52:38.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:52:38.739: INFO: namespace: e2e-tests-downward-api-f7tc2, resource: bindings, ignored listing per whitelist
Jan 20 11:52:38.807: INFO: namespace e2e-tests-downward-api-f7tc2 deletion completed in 6.265451732s

• [SLOW TEST:17.523 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
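The Downward API test above exposes `limits.cpu` through a volume on a container that sets no CPU limit, so the projected value falls back to the node's allocatable CPU. A minimal manifest for the same plugin behavior (pod name hypothetical; the container name matches the log):

```yaml
# No CPU limit is set on the container, so the value written to cpu_limit
# defaults to the node's allocatable CPU.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
```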
S
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:52:38.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-5b84d26e-3b7b-11ea-8bde-0242ac110005
STEP: Creating secret with name s-test-opt-upd-5b84d3d5-3b7b-11ea-8bde-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5b84d26e-3b7b-11ea-8bde-0242ac110005
STEP: Updating secret s-test-opt-upd-5b84d3d5-3b7b-11ea-8bde-0242ac110005
STEP: Creating secret with name s-test-opt-create-5b84d43b-3b7b-11ea-8bde-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:54:12.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-zhwpm" for this suite.
Jan 20 11:54:36.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:54:36.269: INFO: namespace: e2e-tests-secrets-zhwpm, resource: bindings, ignored listing per whitelist
Jan 20 11:54:36.430: INFO: namespace e2e-tests-secrets-zhwpm deletion completed in 24.220414921s

• [SLOW TEST:117.623 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
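The secret-volume test above creates, deletes, and updates secrets marked optional and waits for the kubelet to reflect each change in the mounted volume. The key setting is `optional: true` on the volume source, which lets the pod start (and keep running) even while a referenced secret is absent. A hedged sketch with hypothetical names:

```yaml
# With optional: true the pod tolerates a missing secret; the mounted files
# appear, change, and disappear as the secret is created, updated, or deleted.
apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-demo     # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt-demo   # hypothetical name
      optional: true
```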
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:54:36.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jan 20 11:54:36.803: INFO: Waiting up to 5m0s for pod "client-containers-a1adecb3-3b7b-11ea-8bde-0242ac110005" in namespace "e2e-tests-containers-m9cgd" to be "success or failure"
Jan 20 11:54:36.846: INFO: Pod "client-containers-a1adecb3-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 42.757204ms
Jan 20 11:54:38.871: INFO: Pod "client-containers-a1adecb3-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067282354s
Jan 20 11:54:40.892: INFO: Pod "client-containers-a1adecb3-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08894914s
Jan 20 11:54:42.911: INFO: Pod "client-containers-a1adecb3-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107536909s
Jan 20 11:54:45.406: INFO: Pod "client-containers-a1adecb3-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.60221504s
Jan 20 11:54:47.423: INFO: Pod "client-containers-a1adecb3-3b7b-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.619262718s
STEP: Saw pod success
Jan 20 11:54:47.423: INFO: Pod "client-containers-a1adecb3-3b7b-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:54:47.433: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-containers-a1adecb3-3b7b-11ea-8bde-0242ac110005 container test-container: 
STEP: delete the pod
Jan 20 11:54:47.569: INFO: Waiting for pod client-containers-a1adecb3-3b7b-11ea-8bde-0242ac110005 to disappear
Jan 20 11:54:47.607: INFO: Pod client-containers-a1adecb3-3b7b-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:54:47.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-m9cgd" for this suite.
Jan 20 11:54:53.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:54:53.788: INFO: namespace: e2e-tests-containers-m9cgd, resource: bindings, ignored listing per whitelist
Jan 20 11:54:54.040: INFO: namespace e2e-tests-containers-m9cgd deletion completed in 6.423900364s

• [SLOW TEST:17.609 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
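The "override all" case above replaces both parts of the image's startup invocation: the pod's `command` overrides the image ENTRYPOINT and `args` overrides its CMD. A minimal illustration (names hypothetical):

```yaml
# command replaces the image's ENTRYPOINT; args replaces its CMD.
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/echo"]            # overrides ENTRYPOINT
    args: ["override", "arguments"]   # overrides CMD
```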
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:54:54.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 20 11:54:54.427: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-wvskl,SelfLink:/api/v1/namespaces/e2e-tests-watch-wvskl/configmaps/e2e-watch-test-resource-version,UID:ac17ff3b-3b7b-11ea-a994-fa163e34d433,ResourceVersion:18847841,Generation:0,CreationTimestamp:2020-01-20 11:54:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 20 11:54:54.427: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-wvskl,SelfLink:/api/v1/namespaces/e2e-tests-watch-wvskl/configmaps/e2e-watch-test-resource-version,UID:ac17ff3b-3b7b-11ea-a994-fa163e34d433,ResourceVersion:18847842,Generation:0,CreationTimestamp:2020-01-20 11:54:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:54:54.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-wvskl" for this suite.
Jan 20 11:55:00.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:55:00.536: INFO: namespace: e2e-tests-watch-wvskl, resource: bindings, ignored listing per whitelist
Jan 20 11:55:00.695: INFO: namespace e2e-tests-watch-wvskl deletion completed in 6.261952786s

• [SLOW TEST:6.655 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
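The Watchers test above starts a watch from the resourceVersion returned by the first update and then expects to observe only the later MODIFIED and the DELETED events. A minimal Python sketch of that replay semantics, using the version numbers from the log lines as illustrative data:

```python
def events_after(events, resource_version):
    """A watch started at a given resourceVersion replays only events whose
    resourceVersion is strictly greater than the starting version."""
    return [e for e in events if int(e["resourceVersion"]) > int(resource_version)]

# Illustrative event history for the configmap (versions taken from the log).
events = [
    {"type": "ADDED",    "resourceVersion": "18847839"},
    {"type": "MODIFIED", "resourceVersion": "18847840"},  # first update
    {"type": "MODIFIED", "resourceVersion": "18847841"},  # second update
    {"type": "DELETED",  "resourceVersion": "18847842"},
]

# Watching from the first update's version yields exactly the two
# "Got : MODIFIED" / "Got : DELETED" notifications seen above.
replayed = events_after(events, "18847840")
```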
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:55:00.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 20 11:55:01.098: INFO: Number of nodes with available pods: 0
Jan 20 11:55:01.099: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:02.250: INFO: Number of nodes with available pods: 0
Jan 20 11:55:02.250: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:03.204: INFO: Number of nodes with available pods: 0
Jan 20 11:55:03.204: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:04.127: INFO: Number of nodes with available pods: 0
Jan 20 11:55:04.127: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:05.134: INFO: Number of nodes with available pods: 0
Jan 20 11:55:05.134: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:06.433: INFO: Number of nodes with available pods: 0
Jan 20 11:55:06.433: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:07.118: INFO: Number of nodes with available pods: 0
Jan 20 11:55:07.118: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:08.141: INFO: Number of nodes with available pods: 0
Jan 20 11:55:08.141: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:09.138: INFO: Number of nodes with available pods: 1
Jan 20 11:55:09.138: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 20 11:55:09.299: INFO: Number of nodes with available pods: 0
Jan 20 11:55:09.299: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:10.322: INFO: Number of nodes with available pods: 0
Jan 20 11:55:10.323: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:11.335: INFO: Number of nodes with available pods: 0
Jan 20 11:55:11.335: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:12.358: INFO: Number of nodes with available pods: 0
Jan 20 11:55:12.358: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:13.329: INFO: Number of nodes with available pods: 0
Jan 20 11:55:13.329: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:14.424: INFO: Number of nodes with available pods: 0
Jan 20 11:55:14.424: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:15.322: INFO: Number of nodes with available pods: 0
Jan 20 11:55:15.322: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:16.348: INFO: Number of nodes with available pods: 0
Jan 20 11:55:16.348: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:17.331: INFO: Number of nodes with available pods: 0
Jan 20 11:55:17.331: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:18.330: INFO: Number of nodes with available pods: 0
Jan 20 11:55:18.331: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:19.370: INFO: Number of nodes with available pods: 0
Jan 20 11:55:19.370: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:20.325: INFO: Number of nodes with available pods: 0
Jan 20 11:55:20.325: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:21.326: INFO: Number of nodes with available pods: 0
Jan 20 11:55:21.326: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:22.326: INFO: Number of nodes with available pods: 0
Jan 20 11:55:22.326: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:23.336: INFO: Number of nodes with available pods: 0
Jan 20 11:55:23.337: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:24.390: INFO: Number of nodes with available pods: 0
Jan 20 11:55:24.390: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:25.335: INFO: Number of nodes with available pods: 0
Jan 20 11:55:25.335: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:26.350: INFO: Number of nodes with available pods: 0
Jan 20 11:55:26.350: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:27.334: INFO: Number of nodes with available pods: 0
Jan 20 11:55:27.334: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:28.343: INFO: Number of nodes with available pods: 0
Jan 20 11:55:28.343: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:29.955: INFO: Number of nodes with available pods: 0
Jan 20 11:55:29.955: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:30.347: INFO: Number of nodes with available pods: 0
Jan 20 11:55:30.347: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:31.364: INFO: Number of nodes with available pods: 0
Jan 20 11:55:31.364: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 11:55:32.471: INFO: Number of nodes with available pods: 1
Jan 20 11:55:32.471: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-56mnw, will wait for the garbage collector to delete the pods
Jan 20 11:55:32.642: INFO: Deleting DaemonSet.extensions daemon-set took: 88.450854ms
Jan 20 11:55:32.742: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.387712ms
Jan 20 11:55:42.719: INFO: Number of nodes with available pods: 0
Jan 20 11:55:42.720: INFO: Number of running nodes: 0, number of available pods: 0
Jan 20 11:55:42.752: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-56mnw/daemonsets","resourceVersion":"18847945"},"items":null}

Jan 20 11:55:42.766: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-56mnw/pods","resourceVersion":"18847945"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:55:42.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-56mnw" for this suite.
Jan 20 11:55:48.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:55:49.023: INFO: namespace: e2e-tests-daemonsets-56mnw, resource: bindings, ignored listing per whitelist
Jan 20 11:55:49.064: INFO: namespace e2e-tests-daemonsets-56mnw deletion completed in 6.244292792s

• [SLOW TEST:48.368 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
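The DaemonSet test above polls until every node reports an available daemon pod ("Number of nodes with available pods"), first on creation and again after killing the pod to confirm it is revived. A simplified sketch of that readiness check, with illustrative pod records:

```python
def nodes_with_available_pods(pods):
    """Count distinct nodes that have at least one available daemon pod
    (the quantity the test logs on every poll)."""
    return len({p["node"] for p in pods if p["available"]})

def daemonset_ready(pods, node_count):
    """The test passes once the available count equals the node count."""
    return nodes_with_available_pods(pods) == node_count

# Single-node cluster, pod still starting: count stays at 0, as in the log.
pods = [{"node": "hunter-server-hu5at5svl7ps", "available": False}]
before = nodes_with_available_pods(pods)

# Once the pod becomes available, the cluster reports 1 running node / 1 pod.
pods[0]["available"] = True
after = daemonset_ready(pods, node_count=1)
```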
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:55:49.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 20 11:55:49.254: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-5zmqm,SelfLink:/api/v1/namespaces/e2e-tests-watch-5zmqm/configmaps/e2e-watch-test-label-changed,UID:ccde7a64-3b7b-11ea-a994-fa163e34d433,ResourceVersion:18847975,Generation:0,CreationTimestamp:2020-01-20 11:55:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 20 11:55:49.254: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-5zmqm,SelfLink:/api/v1/namespaces/e2e-tests-watch-5zmqm/configmaps/e2e-watch-test-label-changed,UID:ccde7a64-3b7b-11ea-a994-fa163e34d433,ResourceVersion:18847976,Generation:0,CreationTimestamp:2020-01-20 11:55:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 20 11:55:49.255: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-5zmqm,SelfLink:/api/v1/namespaces/e2e-tests-watch-5zmqm/configmaps/e2e-watch-test-label-changed,UID:ccde7a64-3b7b-11ea-a994-fa163e34d433,ResourceVersion:18847977,Generation:0,CreationTimestamp:2020-01-20 11:55:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 20 11:55:59.447: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-5zmqm,SelfLink:/api/v1/namespaces/e2e-tests-watch-5zmqm/configmaps/e2e-watch-test-label-changed,UID:ccde7a64-3b7b-11ea-a994-fa163e34d433,ResourceVersion:18847991,Generation:0,CreationTimestamp:2020-01-20 11:55:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 20 11:55:59.448: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-5zmqm,SelfLink:/api/v1/namespaces/e2e-tests-watch-5zmqm/configmaps/e2e-watch-test-label-changed,UID:ccde7a64-3b7b-11ea-a994-fa163e34d433,ResourceVersion:18847992,Generation:0,CreationTimestamp:2020-01-20 11:55:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 20 11:55:59.448: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-5zmqm,SelfLink:/api/v1/namespaces/e2e-tests-watch-5zmqm/configmaps/e2e-watch-test-label-changed,UID:ccde7a64-3b7b-11ea-a994-fa163e34d433,ResourceVersion:18847993,Generation:0,CreationTimestamp:2020-01-20 11:55:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:55:59.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-5zmqm" for this suite.
Jan 20 11:56:05.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:56:05.641: INFO: namespace: e2e-tests-watch-5zmqm, resource: bindings, ignored listing per whitelist
Jan 20 11:56:05.641: INFO: namespace e2e-tests-watch-5zmqm deletion completed in 6.187186283s

• [SLOW TEST:16.577 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
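The label-selector watch test above expects a DELETED notification when the configmap's label stops matching the selector, and a fresh ADDED when the label is restored. A small Python sketch of how a selector-filtered watch translates raw object updates into those events (the event model is a simplification, not the apiserver's implementation):

```python
def selector_watch(raw_updates, selector):
    """Translate raw object updates into watch events as seen through a
    label selector: an object entering the selector appears as ADDED,
    staying in it as MODIFIED, and leaving it as DELETED."""
    out, matched = [], False
    for obj in raw_updates:
        m = all(obj["labels"].get(k) == v for k, v in selector.items())
        if m and not matched:
            out.append("ADDED")
        elif m and matched:
            out.append("MODIFIED")
        elif not m and matched:
            out.append("DELETED")
        matched = m
    return out

selector = {"watch-this-configmap": "label-changed-and-restored"}
updates = [
    {"labels": {"watch-this-configmap": "label-changed-and-restored"}},  # create
    {"labels": {"watch-this-configmap": "label-changed-and-restored"}},  # modify
    {"labels": {"watch-this-configmap": "some-other-value"}},            # label changed
]
observed = selector_watch(updates, selector)
```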
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:56:05.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jan 20 11:56:05.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-gx6sw'
Jan 20 11:56:08.362: INFO: stderr: ""
Jan 20 11:56:08.362: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan 20 11:56:09.381: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:56:09.381: INFO: Found 0 / 1
Jan 20 11:56:10.381: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:56:10.381: INFO: Found 0 / 1
Jan 20 11:56:11.373: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:56:11.373: INFO: Found 0 / 1
Jan 20 11:56:12.377: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:56:12.377: INFO: Found 0 / 1
Jan 20 11:56:13.484: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:56:13.484: INFO: Found 0 / 1
Jan 20 11:56:14.378: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:56:14.378: INFO: Found 0 / 1
Jan 20 11:56:15.370: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:56:15.370: INFO: Found 0 / 1
Jan 20 11:56:16.374: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:56:16.374: INFO: Found 1 / 1
Jan 20 11:56:16.374: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 20 11:56:16.383: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:56:16.383: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 20 11:56:16.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-qf5v5 --namespace=e2e-tests-kubectl-gx6sw -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 20 11:56:16.690: INFO: stderr: ""
Jan 20 11:56:16.690: INFO: stdout: "pod/redis-master-qf5v5 patched\n"
STEP: checking annotations
Jan 20 11:56:16.700: INFO: Selector matched 1 pods for map[app:redis]
Jan 20 11:56:16.700: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:56:16.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-gx6sw" for this suite.
Jan 20 11:56:40.743: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:56:40.790: INFO: namespace: e2e-tests-kubectl-gx6sw, resource: bindings, ignored listing per whitelist
Jan 20 11:56:40.960: INFO: namespace e2e-tests-kubectl-gx6sw deletion completed in 24.25482151s

• [SLOW TEST:35.318 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
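The `kubectl patch` invocation above (`-p {"metadata":{"annotations":{"x":"y"}}}`) merges the patch into the existing pod object rather than replacing it. A simplified Python sketch of that merge-patch behavior for nested maps (the pod dict is illustrative and heavily abbreviated):

```python
def merge_patch(obj, patch):
    """Simplified JSON-merge-patch-style update: nested maps are merged
    recursively, null values delete a key, and other values replace.
    This is the map behavior relevant to the annotation patch above."""
    out = dict(obj)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_patch(out[key], value)
        elif value is None:
            out.pop(key, None)
        else:
            out[key] = value
    return out

pod = {"metadata": {"name": "redis-master-qf5v5", "annotations": {}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
```

Note that the existing `metadata.name` survives the patch untouched, which is what makes patching safer than a full object replace.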
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:56:40.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 11:56:41.183: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ebd1dabe-3b7b-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-whfjp" to be "success or failure"
Jan 20 11:56:41.192: INFO: Pod "downwardapi-volume-ebd1dabe-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.559492ms
Jan 20 11:56:43.259: INFO: Pod "downwardapi-volume-ebd1dabe-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075419307s
Jan 20 11:56:45.297: INFO: Pod "downwardapi-volume-ebd1dabe-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113119199s
Jan 20 11:56:47.325: INFO: Pod "downwardapi-volume-ebd1dabe-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141671413s
Jan 20 11:56:49.347: INFO: Pod "downwardapi-volume-ebd1dabe-3b7b-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163741781s
Jan 20 11:56:51.360: INFO: Pod "downwardapi-volume-ebd1dabe-3b7b-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.175958978s
STEP: Saw pod success
Jan 20 11:56:51.360: INFO: Pod "downwardapi-volume-ebd1dabe-3b7b-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:56:51.365: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-ebd1dabe-3b7b-11ea-8bde-0242ac110005 container client-container: 
STEP: delete the pod
Jan 20 11:56:52.089: INFO: Waiting for pod downwardapi-volume-ebd1dabe-3b7b-11ea-8bde-0242ac110005 to disappear
Jan 20 11:56:52.141: INFO: Pod downwardapi-volume-ebd1dabe-3b7b-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:56:52.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-whfjp" for this suite.
Jan 20 11:56:58.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:56:58.632: INFO: namespace: e2e-tests-projected-whfjp, resource: bindings, ignored listing per whitelist
Jan 20 11:56:58.665: INFO: namespace e2e-tests-projected-whfjp deletion completed in 6.504205731s

• [SLOW TEST:17.704 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
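The repeated `Phase="Pending" ... Elapsed: ...` lines above come from the framework's wait loop: it polls the pod phase at a fixed interval until the phase is terminal or the 5m0s timeout expires. A deterministic sketch of that loop, driven by a precomputed sequence of phases instead of a live API server:

```python
def wait_for_success_or_failure(phases, max_polls):
    """Sketch of the e2e wait loop: each poll reads the pod phase; the wait
    ends when a terminal phase (Succeeded/Failed) is seen or when the poll
    budget (standing in for the 5m0s timeout) runs out."""
    last = "Pending"
    for poll, phase in enumerate(phases[:max_polls], start=1):
        last = phase
        if phase in ("Succeeded", "Failed"):
            return phase, poll
    return last, min(len(phases), max_polls)

# Five Pending polls followed by success, mirroring the log sequence above.
phases = ["Pending"] * 5 + ["Succeeded"]
result = wait_for_success_or_failure(phases, max_polls=150)
```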
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:56:58.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-pqxf
STEP: Creating a pod to test atomic-volume-subpath
Jan 20 11:56:58.875: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pqxf" in namespace "e2e-tests-subpath-zqbxj" to be "success or failure"
Jan 20 11:56:58.967: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Pending", Reason="", readiness=false. Elapsed: 92.003518ms
Jan 20 11:57:00.979: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104157405s
Jan 20 11:57:03.003: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127757597s
Jan 20 11:57:05.141: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.265743497s
Jan 20 11:57:07.206: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.330637238s
Jan 20 11:57:09.217: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.342126696s
Jan 20 11:57:11.754: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.879120949s
Jan 20 11:57:13.794: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.919475904s
Jan 20 11:57:15.814: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Running", Reason="", readiness=false. Elapsed: 16.93904128s
Jan 20 11:57:17.836: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Running", Reason="", readiness=false. Elapsed: 18.961087897s
Jan 20 11:57:19.882: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Running", Reason="", readiness=false. Elapsed: 21.007167401s
Jan 20 11:57:21.902: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Running", Reason="", readiness=false. Elapsed: 23.02760797s
Jan 20 11:57:23.984: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Running", Reason="", readiness=false. Elapsed: 25.108702625s
Jan 20 11:57:25.999: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Running", Reason="", readiness=false. Elapsed: 27.123888538s
Jan 20 11:57:28.022: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Running", Reason="", readiness=false. Elapsed: 29.146762681s
Jan 20 11:57:30.040: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Running", Reason="", readiness=false. Elapsed: 31.164872591s
Jan 20 11:57:32.069: INFO: Pod "pod-subpath-test-configmap-pqxf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.193722222s
STEP: Saw pod success
Jan 20 11:57:32.069: INFO: Pod "pod-subpath-test-configmap-pqxf" satisfied condition "success or failure"
Jan 20 11:57:32.078: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-configmap-pqxf container test-container-subpath-configmap-pqxf: 
STEP: delete the pod
Jan 20 11:57:32.287: INFO: Waiting for pod pod-subpath-test-configmap-pqxf to disappear
Jan 20 11:57:32.297: INFO: Pod pod-subpath-test-configmap-pqxf no longer exists
STEP: Deleting pod pod-subpath-test-configmap-pqxf
Jan 20 11:57:32.297: INFO: Deleting pod "pod-subpath-test-configmap-pqxf" in namespace "e2e-tests-subpath-zqbxj"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:57:32.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-zqbxj" for this suite.
Jan 20 11:57:38.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:57:38.527: INFO: namespace: e2e-tests-subpath-zqbxj, resource: bindings, ignored listing per whitelist
Jan 20 11:57:38.656: INFO: namespace e2e-tests-subpath-zqbxj deletion completed in 6.340773718s

• [SLOW TEST:39.991 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
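The subpath test above mounts a single key of a configmap volume over an existing file via `subPath`, instead of mounting the whole volume directory. A minimal sketch of that path resolution (function and file names are illustrative):

```python
def resolve_subpath_mount(volume_files, mount_path, sub_path):
    """Sketch of a subPath mount: expose exactly one file from the volume
    at mountPath, leaving the rest of the volume unmounted. Raises if the
    requested key does not exist in the volume."""
    if sub_path not in volume_files:
        raise FileNotFoundError(sub_path)
    return {mount_path: volume_files[sub_path]}

# A configmap volume with one key, mounted over an existing file.
volume = {"configmap-contents": "provisioned data"}
mounts = resolve_subpath_mount(volume, "/etc/existing-file", "configmap-contents")
```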
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:57:38.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-0e276027-3b7c-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 20 11:57:38.812: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0e2c3057-3b7c-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-pfl6b" to be "success or failure"
Jan 20 11:57:38.864: INFO: Pod "pod-projected-secrets-0e2c3057-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 51.462279ms
Jan 20 11:57:40.965: INFO: Pod "pod-projected-secrets-0e2c3057-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1524525s
Jan 20 11:57:42.991: INFO: Pod "pod-projected-secrets-0e2c3057-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178920239s
Jan 20 11:57:45.017: INFO: Pod "pod-projected-secrets-0e2c3057-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205070181s
Jan 20 11:57:47.042: INFO: Pod "pod-projected-secrets-0e2c3057-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.229592267s
Jan 20 11:57:49.059: INFO: Pod "pod-projected-secrets-0e2c3057-3b7c-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.246689818s
STEP: Saw pod success
Jan 20 11:57:49.059: INFO: Pod "pod-projected-secrets-0e2c3057-3b7c-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 11:57:49.065: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-0e2c3057-3b7c-11ea-8bde-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 20 11:57:49.570: INFO: Waiting for pod pod-projected-secrets-0e2c3057-3b7c-11ea-8bde-0242ac110005 to disappear
Jan 20 11:57:49.583: INFO: Pod pod-projected-secrets-0e2c3057-3b7c-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:57:49.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pfl6b" for this suite.
Jan 20 11:57:55.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:57:55.698: INFO: namespace: e2e-tests-projected-pfl6b, resource: bindings, ignored listing per whitelist
Jan 20 11:57:55.772: INFO: namespace e2e-tests-projected-pfl6b deletion completed in 6.179205575s

• [SLOW TEST:17.115 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
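Editor's note: the repeated `Waiting up to 5m0s for pod ... Phase="Pending" ... Elapsed: ...` lines in the spec above come from the e2e framework's poll-until-condition loop (check, log elapsed time, retry, fail on timeout). A minimal self-contained sketch of that pattern follows; the function name and signature are illustrative assumptions, not the framework's actual Go API.

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `condition` until it returns truthy or `timeout` seconds elapse.

    Mirrors the log's pattern: the e2e framework checks the pod phase,
    records the elapsed time, sleeps `interval`, and retries until the
    pod reaches "Succeeded"/"Failed" or the 5m0s budget is exhausted.
    Returns the elapsed time on success; raises TimeoutError otherwise.
    """
    start = clock()
    while True:
        if condition():
            return clock() - start
        if clock() - start >= timeout:
            raise TimeoutError(f"condition not met within {timeout}s")
        sleep(interval)
```

The same loop shape also underlies the later "Waiting for pod ... to disappear" sequences: only the condition (pod absent rather than pod terminal) changes.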
SSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:57:55.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:58:07.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-m2hg8" for this suite.
Jan 20 11:58:31.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:58:31.296: INFO: namespace: e2e-tests-replication-controller-m2hg8, resource: bindings, ignored listing per whitelist
Jan 20 11:58:31.305: INFO: namespace e2e-tests-replication-controller-m2hg8 deletion completed in 24.220436586s

• [SLOW TEST:35.533 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:58:31.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jan 20 11:58:31.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:58:32.375: INFO: stderr: ""
Jan 20 11:58:32.375: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 11:58:32.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:58:32.707: INFO: stderr: ""
Jan 20 11:58:32.708: INFO: stdout: "update-demo-nautilus-62sss update-demo-nautilus-zl4ns "
Jan 20 11:58:32.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-62sss -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:58:32.890: INFO: stderr: ""
Jan 20 11:58:32.890: INFO: stdout: ""
Jan 20 11:58:32.890: INFO: update-demo-nautilus-62sss is created but not running
Jan 20 11:58:37.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:58:38.064: INFO: stderr: ""
Jan 20 11:58:38.065: INFO: stdout: "update-demo-nautilus-62sss update-demo-nautilus-zl4ns "
Jan 20 11:58:38.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-62sss -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:58:38.216: INFO: stderr: ""
Jan 20 11:58:38.216: INFO: stdout: ""
Jan 20 11:58:38.216: INFO: update-demo-nautilus-62sss is created but not running
Jan 20 11:58:43.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:58:43.406: INFO: stderr: ""
Jan 20 11:58:43.406: INFO: stdout: "update-demo-nautilus-62sss update-demo-nautilus-zl4ns "
Jan 20 11:58:43.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-62sss -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:58:43.560: INFO: stderr: ""
Jan 20 11:58:43.560: INFO: stdout: ""
Jan 20 11:58:43.560: INFO: update-demo-nautilus-62sss is created but not running
Jan 20 11:58:48.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:58:48.733: INFO: stderr: ""
Jan 20 11:58:48.733: INFO: stdout: "update-demo-nautilus-62sss update-demo-nautilus-zl4ns "
Jan 20 11:58:48.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-62sss -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:58:48.884: INFO: stderr: ""
Jan 20 11:58:48.884: INFO: stdout: "true"
Jan 20 11:58:48.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-62sss -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:58:48.981: INFO: stderr: ""
Jan 20 11:58:48.981: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 11:58:48.981: INFO: validating pod update-demo-nautilus-62sss
Jan 20 11:58:48.999: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 11:58:48.999: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 11:58:48.999: INFO: update-demo-nautilus-62sss is verified up and running
Jan 20 11:58:48.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zl4ns -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:58:49.117: INFO: stderr: ""
Jan 20 11:58:49.117: INFO: stdout: "true"
Jan 20 11:58:49.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zl4ns -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:58:49.284: INFO: stderr: ""
Jan 20 11:58:49.284: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 11:58:49.284: INFO: validating pod update-demo-nautilus-zl4ns
Jan 20 11:58:49.296: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 11:58:49.296: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 11:58:49.296: INFO: update-demo-nautilus-zl4ns is verified up and running
STEP: rolling-update to new replication controller
Jan 20 11:58:49.299: INFO: scanned /root for discovery docs: 
Jan 20 11:58:49.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:59:24.088: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 20 11:59:24.088: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 11:59:24.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:59:24.283: INFO: stderr: ""
Jan 20 11:59:24.283: INFO: stdout: "update-demo-kitten-hsx6j update-demo-kitten-kp22l "
Jan 20 11:59:24.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hsx6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:59:24.392: INFO: stderr: ""
Jan 20 11:59:24.392: INFO: stdout: "true"
Jan 20 11:59:24.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hsx6j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:59:24.522: INFO: stderr: ""
Jan 20 11:59:24.522: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 20 11:59:24.522: INFO: validating pod update-demo-kitten-hsx6j
Jan 20 11:59:24.563: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 20 11:59:24.563: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 20 11:59:24.563: INFO: update-demo-kitten-hsx6j is verified up and running
Jan 20 11:59:24.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kp22l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:59:24.656: INFO: stderr: ""
Jan 20 11:59:24.656: INFO: stdout: "true"
Jan 20 11:59:24.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kp22l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-qn52b'
Jan 20 11:59:24.765: INFO: stderr: ""
Jan 20 11:59:24.765: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 20 11:59:24.765: INFO: validating pod update-demo-kitten-kp22l
Jan 20 11:59:24.776: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 20 11:59:24.776: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jan 20 11:59:24.776: INFO: update-demo-kitten-kp22l is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 11:59:24.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-qn52b" for this suite.
Jan 20 11:59:58.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 11:59:59.082: INFO: namespace: e2e-tests-kubectl-qn52b, resource: bindings, ignored listing per whitelist
Jan 20 11:59:59.082: INFO: namespace e2e-tests-kubectl-qn52b deletion completed in 34.300186487s

• [SLOW TEST:87.777 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
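Editor's note: the rolling-update transcript above ("keep 2 pods available, don't exceed 3 pods") follows a simple availability/surge scaling rule: scale the new controller up whenever the total stays under the cap, otherwise scale the old one down whenever availability permits. A hedged sketch of that rule follows; the function and its parameter names are illustrative, not kubectl's implementation.

```python
def rolling_update_steps(old, new_target, min_available, max_total):
    """Return the (old_count, new_count) sequence of a rolling update.

    Each step either adds one new-controller replica (if the combined
    count stays below `max_total`) or removes one old replica (if at
    least `min_available` pods remain afterwards), until the old
    controller reaches 0 and the new one reaches `new_target`.
    """
    new = 0
    steps = []
    while new < new_target or old > 0:
        if old + new < max_total and new < new_target:
            new += 1          # surge: bring up one new replica
        elif old + new - 1 >= min_available and old > 0:
            old -= 1          # drain: take down one old replica
        else:
            raise RuntimeError("constraints make no progress possible")
        steps.append((old, new))
    return steps
```

With the log's values (2 old replicas, 2 new, keep 2 available, cap at 3) this yields the same step sequence the command printed: kitten up to 1, nautilus down to 1, kitten up to 2, nautilus down to 0.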
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 11:59:59.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 20 11:59:59.259: INFO: Waiting up to 5m0s for pod "pod-61e35b20-3b7c-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-9qms8" to be "success or failure"
Jan 20 11:59:59.324: INFO: Pod "pod-61e35b20-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 65.380889ms
Jan 20 12:00:01.612: INFO: Pod "pod-61e35b20-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.353243702s
Jan 20 12:00:03.636: INFO: Pod "pod-61e35b20-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377244838s
Jan 20 12:00:05.655: INFO: Pod "pod-61e35b20-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.396654185s
Jan 20 12:00:08.031: INFO: Pod "pod-61e35b20-3b7c-11ea-8bde-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.772485789s
Jan 20 12:00:10.044: INFO: Pod "pod-61e35b20-3b7c-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.785608494s
STEP: Saw pod success
Jan 20 12:00:10.045: INFO: Pod "pod-61e35b20-3b7c-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:00:10.048: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-61e35b20-3b7c-11ea-8bde-0242ac110005 container test-container: 
STEP: delete the pod
Jan 20 12:00:10.246: INFO: Waiting for pod pod-61e35b20-3b7c-11ea-8bde-0242ac110005 to disappear
Jan 20 12:00:10.266: INFO: Pod pod-61e35b20-3b7c-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:00:10.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9qms8" for this suite.
Jan 20 12:00:17.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:00:17.725: INFO: namespace: e2e-tests-emptydir-9qms8, resource: bindings, ignored listing per whitelist
Jan 20 12:00:17.777: INFO: namespace e2e-tests-emptydir-9qms8 deletion completed in 7.500838779s

• [SLOW TEST:18.694 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:00:17.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-s2cbw
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-s2cbw to expose endpoints map[]
Jan 20 12:00:18.043: INFO: Get endpoints failed (9.590025ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 20 12:00:19.060: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-s2cbw exposes endpoints map[] (1.02676933s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-s2cbw
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-s2cbw to expose endpoints map[pod1:[80]]
Jan 20 12:00:23.398: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.290353696s elapsed, will retry)
Jan 20 12:00:27.888: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-s2cbw exposes endpoints map[pod1:[80]] (8.780343744s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-s2cbw
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-s2cbw to expose endpoints map[pod1:[80] pod2:[80]]
Jan 20 12:00:32.774: INFO: Unexpected endpoints: found map[6db61d51-3b7c-11ea-a994-fa163e34d433:[80]], expected map[pod1:[80] pod2:[80]] (4.869374386s elapsed, will retry)
Jan 20 12:00:36.936: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-s2cbw exposes endpoints map[pod1:[80] pod2:[80]] (9.030770646s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-s2cbw
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-s2cbw to expose endpoints map[pod2:[80]]
Jan 20 12:00:38.068: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-s2cbw exposes endpoints map[pod2:[80]] (1.108449782s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-s2cbw
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-s2cbw to expose endpoints map[]
Jan 20 12:00:40.112: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-s2cbw exposes endpoints map[] (2.03084519s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:00:40.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-s2cbw" for this suite.
Jan 20 12:01:04.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:01:04.979: INFO: namespace: e2e-tests-services-s2cbw, resource: bindings, ignored listing per whitelist
Jan 20 12:01:05.029: INFO: namespace e2e-tests-services-s2cbw deletion completed in 24.222835039s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:47.252 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
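Editor's note: the Services spec above validates endpoint maps like `map[pod1:[80] pod2:[80]]` by collapsing the service's Endpoints object into a pod-name-to-ports mapping. A sketch of that collapse follows; the dict shapes mirror the v1 Endpoints API (`subsets[].addresses[].targetRef.name`, `subsets[].ports[].port`), but this helper is an assumption for illustration, not the framework's code.

```python
def format_endpoints(subsets):
    """Collapse Endpoints subsets into the map[podName:[ports]] form the log prints.

    `subsets` is a list of dicts shaped like the v1 Endpoints "subsets"
    field: each entry carries "addresses" (whose "targetRef" names the
    backing pod) and "ports" (with integer "port" values).
    """
    result = {}
    for subset in subsets:
        ports = sorted(p["port"] for p in subset.get("ports", []))
        for addr in subset.get("addresses", []):
            name = addr.get("targetRef", {}).get("name")
            if name:
                result.setdefault(name, []).extend(ports)
    return result
```

The empty map checks at the start and end of the spec (`exposes endpoints map[]`) correspond to an Endpoints object with no ready subsets, i.e. this helper returning `{}`.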
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:01:05.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-8933d18e-3b7c-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 20 12:01:05.239: INFO: Waiting up to 5m0s for pod "pod-configmaps-8935376a-3b7c-11ea-8bde-0242ac110005" in namespace "e2e-tests-configmap-mkfwc" to be "success or failure"
Jan 20 12:01:05.250: INFO: Pod "pod-configmaps-8935376a-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.217318ms
Jan 20 12:01:07.270: INFO: Pod "pod-configmaps-8935376a-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030455301s
Jan 20 12:01:09.287: INFO: Pod "pod-configmaps-8935376a-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047093055s
Jan 20 12:01:11.655: INFO: Pod "pod-configmaps-8935376a-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415424773s
Jan 20 12:01:13.713: INFO: Pod "pod-configmaps-8935376a-3b7c-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.473834023s
Jan 20 12:01:16.344: INFO: Pod "pod-configmaps-8935376a-3b7c-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.104565309s
STEP: Saw pod success
Jan 20 12:01:16.344: INFO: Pod "pod-configmaps-8935376a-3b7c-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:01:16.674: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-8935376a-3b7c-11ea-8bde-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 20 12:01:16.902: INFO: Waiting for pod pod-configmaps-8935376a-3b7c-11ea-8bde-0242ac110005 to disappear
Jan 20 12:01:16.929: INFO: Pod pod-configmaps-8935376a-3b7c-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:01:16.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-mkfwc" for this suite.
Jan 20 12:01:23.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:01:23.234: INFO: namespace: e2e-tests-configmap-mkfwc, resource: bindings, ignored listing per whitelist
Jan 20 12:01:23.234: INFO: namespace e2e-tests-configmap-mkfwc deletion completed in 6.292708555s

• [SLOW TEST:18.205 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:01:23.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 20 12:01:43.918: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 20 12:01:43.965: INFO: Pod pod-with-poststart-http-hook still exists
Jan 20 12:01:45.965: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 20 12:01:45.983: INFO: Pod pod-with-poststart-http-hook still exists
Jan 20 12:01:47.965: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 20 12:01:47.991: INFO: Pod pod-with-poststart-http-hook still exists
Jan 20 12:01:49.965: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 20 12:01:49.979: INFO: Pod pod-with-poststart-http-hook still exists
Jan 20 12:01:51.965: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 20 12:01:52.029: INFO: Pod pod-with-poststart-http-hook still exists
Jan 20 12:01:53.965: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 20 12:01:54.024: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:01:54.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-9xtph" for this suite.
Jan 20 12:02:18.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:02:18.297: INFO: namespace: e2e-tests-container-lifecycle-hook-9xtph, resource: bindings, ignored listing per whitelist
Jan 20 12:02:18.335: INFO: namespace e2e-tests-container-lifecycle-hook-9xtph deletion completed in 24.298157781s

• [SLOW TEST:55.099 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:02:18.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 20 12:02:18.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:02:29.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5xnjh" for this suite.
Jan 20 12:03:13.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:03:13.480: INFO: namespace: e2e-tests-pods-5xnjh, resource: bindings, ignored listing per whitelist
Jan 20 12:03:13.630: INFO: namespace e2e-tests-pods-5xnjh deletion completed in 44.425531484s

• [SLOW TEST:55.295 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:03:13.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:03:22.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-h7hjc" for this suite.
Jan 20 12:04:16.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:04:16.591: INFO: namespace: e2e-tests-kubelet-test-h7hjc, resource: bindings, ignored listing per whitelist
Jan 20 12:04:16.705: INFO: namespace e2e-tests-kubelet-test-h7hjc deletion completed in 54.421517247s

• [SLOW TEST:63.075 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:04:16.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-xbmhd.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xbmhd.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-xbmhd.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-xbmhd.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xbmhd.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-xbmhd.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
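Each probe script above loops dig/getent lookups and, on success, writes an OK marker file named after the lookup. The pod A-record name is derived from the pod IP by an awk one-liner; a minimal, self-contained sketch of that transformation (IP and namespace taken from this run; the helper function name is made up for illustration):

```shell
#!/bin/sh
# Sketch of the podARec derivation used in the probe scripts above:
# dots in the pod IP become dashes, suffixed with <namespace>.pod.cluster.local.
pod_a_record() {
  ip="$1"; ns="$2"
  echo "$ip" | awk -F. -v ns="$ns" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}

pod_a_record "10.32.0.4" "e2e-tests-dns-xbmhd"
# → 10-32-0-4.e2e-tests-dns-xbmhd.pod.cluster.local
```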

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 20 12:04:30.959: INFO: Unable to read wheezy_udp@kubernetes.default from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:30.967: INFO: Unable to read wheezy_tcp@kubernetes.default from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:30.975: INFO: Unable to read wheezy_udp@kubernetes.default.svc from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:30.995: INFO: Unable to read wheezy_tcp@kubernetes.default.svc from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.006: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.020: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.027: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xbmhd.svc.cluster.local from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.037: INFO: Unable to read wheezy_hosts@dns-querier-1 from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.041: INFO: Unable to read wheezy_udp@PodARecord from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.044: INFO: Unable to read wheezy_tcp@PodARecord from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.048: INFO: Unable to read jessie_udp@kubernetes.default from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.051: INFO: Unable to read jessie_tcp@kubernetes.default from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.056: INFO: Unable to read jessie_udp@kubernetes.default.svc from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.060: INFO: Unable to read jessie_tcp@kubernetes.default.svc from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.063: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.066: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.068: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xbmhd.svc.cluster.local from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.072: INFO: Unable to read jessie_hosts@dns-querier-1 from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.075: INFO: Unable to read jessie_udp@PodARecord from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.081: INFO: Unable to read jessie_tcp@PodARecord from pod e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005: the server could not find the requested resource (get pods dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005)
Jan 20 12:04:31.081: INFO: Lookups using e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005 failed for: [wheezy_udp@kubernetes.default wheezy_tcp@kubernetes.default wheezy_udp@kubernetes.default.svc wheezy_tcp@kubernetes.default.svc wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xbmhd.svc.cluster.local wheezy_hosts@dns-querier-1 wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default jessie_tcp@kubernetes.default jessie_udp@kubernetes.default.svc jessie_tcp@kubernetes.default.svc jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-xbmhd.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan 20 12:04:36.181: INFO: DNS probes using e2e-tests-dns-xbmhd/dns-test-fb689cb4-3b7c-11ea-8bde-0242ac110005 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:04:36.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-xbmhd" for this suite.
Jan 20 12:04:44.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:04:44.719: INFO: namespace: e2e-tests-dns-xbmhd, resource: bindings, ignored listing per whitelist
Jan 20 12:04:44.779: INFO: namespace e2e-tests-dns-xbmhd deletion completed in 8.356098737s

• [SLOW TEST:28.074 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:04:44.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jan 20 12:04:45.103: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix338864449/test'
STEP: retrieving proxy /api/ output
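The test above starts `kubectl proxy --unix-socket=/path` and then fetches `/api/` through that socket rather than a TCP port. A self-contained sketch of the same pattern, using a local stand-in HTTP server instead of kubectl (the socket path, handler, and payload are all made up for illustration):

```python
# Stand-in for `kubectl proxy --unix-socket=...`: an HTTP server bound to a
# unix domain socket, queried the way the test queries the proxy's /api/ path.
import http.client
import os
import socket
import socketserver
import tempfile
import threading
from http.server import BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"paths": ["/api"]}'  # made-up payload standing in for the API root
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

class UnixHTTPServer(socketserver.UnixStreamServer):
    # BaseHTTPRequestHandler expects a (host, port) client address;
    # unix sockets report a plain string, so substitute a dummy tuple.
    def get_request(self):
        request, _ = super().get_request()
        return request, ("localhost", 0)

sock_path = os.path.join(tempfile.mkdtemp(), "proxy.sock")
server = UnixHTTPServer(sock_path, Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: an HTTPConnection driven over a pre-connected unix socket
# (curl does the same with `curl --unix-socket /path http://localhost/api/`).
conn = http.client.HTTPConnection("localhost")
conn.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
conn.sock.connect(sock_path)
conn.request("GET", "/api/")
resp = conn.getresponse()
status, body = resp.status, resp.read().decode()
print(status, body)
server.shutdown()
```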
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:04:45.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-sqcn5" for this suite.
Jan 20 12:04:51.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:04:51.500: INFO: namespace: e2e-tests-kubectl-sqcn5, resource: bindings, ignored listing per whitelist
Jan 20 12:04:51.570: INFO: namespace e2e-tests-kubectl-sqcn5 deletion completed in 6.198206885s

• [SLOW TEST:6.790 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:04:51.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-cvj45 in namespace e2e-tests-proxy-8j66j
I0120 12:04:51.949194       8 runners.go:184] Created replication controller with name: proxy-service-cvj45, namespace: e2e-tests-proxy-8j66j, replica count: 1
I0120 12:04:53.000250       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:04:54.000719       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:04:55.001133       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:04:56.001824       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:04:57.002234       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:04:58.002598       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:04:59.002957       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:05:00.003311       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:05:01.003725       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0120 12:05:02.004183       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0120 12:05:03.004523       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0120 12:05:04.005048       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0120 12:05:05.005385       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0120 12:05:06.005727       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0120 12:05:07.006302       8 runners.go:184] proxy-service-cvj45 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 20 12:05:07.019: INFO: setup took 15.155418364s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 20 12:05:07.066: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-8j66j/services/proxy-service-cvj45:portname1/proxy/: foo (200; 46.534391ms)
Jan 20 12:05:07.068: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-8j66j/pods/proxy-service-cvj45-pq94j:160/proxy/: foo (200; 48.438254ms)
Jan 20 12:05:07.069: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-8j66j/pods/http:proxy-service-cvj45-pq94j:160/proxy/: foo (200; 48.946715ms)
Jan 20 12:05:07.069: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-8j66j/pods/proxy-service-cvj45-pq94j:1080/proxy/: ...
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-8dht
STEP: Creating a pod to test atomic-volume-subpath
Jan 20 12:05:19.632: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-8dht" in namespace "e2e-tests-subpath-h8dgx" to be "success or failure"
Jan 20 12:05:19.661: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Pending", Reason="", readiness=false. Elapsed: 29.019504ms
Jan 20 12:05:21.882: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Pending", Reason="", readiness=false. Elapsed: 2.250113646s
Jan 20 12:05:23.904: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271956019s
Jan 20 12:05:26.325: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Pending", Reason="", readiness=false. Elapsed: 6.693247786s
Jan 20 12:05:28.578: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Pending", Reason="", readiness=false. Elapsed: 8.946291553s
Jan 20 12:05:30.664: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Pending", Reason="", readiness=false. Elapsed: 11.032591841s
Jan 20 12:05:32.680: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Pending", Reason="", readiness=false. Elapsed: 13.048589273s
Jan 20 12:05:34.696: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Pending", Reason="", readiness=false. Elapsed: 15.064194473s
Jan 20 12:05:36.724: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Running", Reason="", readiness=false. Elapsed: 17.092558013s
Jan 20 12:05:38.740: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Running", Reason="", readiness=false. Elapsed: 19.107973385s
Jan 20 12:05:40.759: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Running", Reason="", readiness=false. Elapsed: 21.127026938s
Jan 20 12:05:42.773: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Running", Reason="", readiness=false. Elapsed: 23.140889832s
Jan 20 12:05:44.791: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Running", Reason="", readiness=false. Elapsed: 25.159093959s
Jan 20 12:05:46.807: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Running", Reason="", readiness=false. Elapsed: 27.175836991s
Jan 20 12:05:48.826: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Running", Reason="", readiness=false. Elapsed: 29.193970476s
Jan 20 12:05:50.862: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Running", Reason="", readiness=false. Elapsed: 31.230100886s
Jan 20 12:05:52.878: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Running", Reason="", readiness=false. Elapsed: 33.246671116s
Jan 20 12:05:54.902: INFO: Pod "pod-subpath-test-projected-8dht": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.270082136s
STEP: Saw pod success
Jan 20 12:05:54.902: INFO: Pod "pod-subpath-test-projected-8dht" satisfied condition "success or failure"
Jan 20 12:05:54.911: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-projected-8dht container test-container-subpath-projected-8dht: 
STEP: delete the pod
Jan 20 12:05:55.815: INFO: Waiting for pod pod-subpath-test-projected-8dht to disappear
Jan 20 12:05:56.108: INFO: Pod pod-subpath-test-projected-8dht no longer exists
STEP: Deleting pod pod-subpath-test-projected-8dht
Jan 20 12:05:56.108: INFO: Deleting pod "pod-subpath-test-projected-8dht" in namespace "e2e-tests-subpath-h8dgx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:05:56.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-h8dgx" for this suite.
Jan 20 12:06:02.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:06:02.396: INFO: namespace: e2e-tests-subpath-h8dgx, resource: bindings, ignored listing per whitelist
Jan 20 12:06:02.489: INFO: namespace e2e-tests-subpath-h8dgx deletion completed in 6.32814404s

• [SLOW TEST:43.080 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:06:02.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 20 12:06:02.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-n4r4t'
Jan 20 12:06:02.994: INFO: stderr: ""
Jan 20 12:06:02.994: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan 20 12:06:13.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-n4r4t -o json'
Jan 20 12:06:14.917: INFO: stderr: ""
Jan 20 12:06:14.918: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-20T12:06:02Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"e2e-tests-kubectl-n4r4t\",\n        \"resourceVersion\": \"18849334\",\n        \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-n4r4t/pods/e2e-test-nginx-pod\",\n        \"uid\": \"3aad3c96-3b7d-11ea-a994-fa163e34d433\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-vdc4z\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"hunter-server-hu5at5svl7ps\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-vdc4z\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-vdc4z\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-20T12:06:03Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-20T12:06:10Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-20T12:06:10Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-20T12:06:02Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://05643f6c04844be63252f0c9fbefcb6fbdd9eb24e45e1632bb81e8f102d2886e\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-20T12:06:09Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.240\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.4\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-20T12:06:03Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 20 12:06:14.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-n4r4t'
Jan 20 12:06:15.384: INFO: stderr: ""
Jan 20 12:06:15.384: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jan 20 12:06:15.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-n4r4t'
Jan 20 12:06:22.760: INFO: stderr: ""
Jan 20 12:06:22.760: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:06:22.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-n4r4t" for this suite.
Jan 20 12:06:28.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:06:29.074: INFO: namespace: e2e-tests-kubectl-n4r4t, resource: bindings, ignored listing per whitelist
Jan 20 12:06:29.162: INFO: namespace e2e-tests-kubectl-n4r4t deletion completed in 6.363123836s

• [SLOW TEST:26.673 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:06:29.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 20 12:06:42.656: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:06:43.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-s6vnw" for this suite.
Jan 20 12:07:10.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:07:10.688: INFO: namespace: e2e-tests-replicaset-s6vnw, resource: bindings, ignored listing per whitelist
Jan 20 12:07:10.739: INFO: namespace e2e-tests-replicaset-s6vnw deletion completed in 26.870642871s

• [SLOW TEST:41.576 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
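The adopt/release behavior verified above comes down to label-selector matching: an orphan pod whose labels satisfy the ReplicaSet's selector is adopted, and a pod whose matched label changes is released. A minimal sketch of that decision (illustrative helper, not the controller's actual code; assumes an equality-based selector):

```go
package main

import "fmt"

// matchesSelector reports whether a pod's labels satisfy every
// key/value pair in an equality-based selector.
func matchesSelector(selector, podLabels map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"name": "pod-adoption-release"}

	// Orphan pod whose labels match: adopted when the ReplicaSet is created.
	pod := map[string]string{"name": "pod-adoption-release"}
	fmt.Println("adopted:", matchesSelector(selector, pod))

	// The matched label changes: the controller releases (re-orphans) the pod.
	pod["name"] = "pod-adoption-release-changed"
	fmt.Println("released:", !matchesSelector(selector, pod))
}
```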
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:07:10.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-63374e6d-3b7d-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 20 12:07:10.991: INFO: Waiting up to 5m0s for pod "pod-configmaps-63386778-3b7d-11ea-8bde-0242ac110005" in namespace "e2e-tests-configmap-j5xmk" to be "success or failure"
Jan 20 12:07:11.005: INFO: Pod "pod-configmaps-63386778-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.342643ms
Jan 20 12:07:13.023: INFO: Pod "pod-configmaps-63386778-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032569859s
Jan 20 12:07:15.039: INFO: Pod "pod-configmaps-63386778-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047804003s
Jan 20 12:07:17.053: INFO: Pod "pod-configmaps-63386778-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061731343s
Jan 20 12:07:19.066: INFO: Pod "pod-configmaps-63386778-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.075252408s
Jan 20 12:07:21.086: INFO: Pod "pod-configmaps-63386778-3b7d-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.094967877s
STEP: Saw pod success
Jan 20 12:07:21.086: INFO: Pod "pod-configmaps-63386778-3b7d-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:07:21.091: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-63386778-3b7d-11ea-8bde-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 20 12:07:21.217: INFO: Waiting for pod pod-configmaps-63386778-3b7d-11ea-8bde-0242ac110005 to disappear
Jan 20 12:07:21.235: INFO: Pod pod-configmaps-63386778-3b7d-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:07:21.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-j5xmk" for this suite.
Jan 20 12:07:27.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:07:27.380: INFO: namespace: e2e-tests-configmap-j5xmk, resource: bindings, ignored listing per whitelist
Jan 20 12:07:27.518: INFO: namespace e2e-tests-configmap-j5xmk deletion completed in 6.271352506s

• [SLOW TEST:16.779 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:07:27.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-6d2c6f49-3b7d-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 20 12:07:27.763: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6d2d1383-3b7d-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-nfdlf" to be "success or failure"
Jan 20 12:07:27.778: INFO: Pod "pod-projected-secrets-6d2d1383-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.005509ms
Jan 20 12:07:29.826: INFO: Pod "pod-projected-secrets-6d2d1383-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063395693s
Jan 20 12:07:31.848: INFO: Pod "pod-projected-secrets-6d2d1383-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084909469s
Jan 20 12:07:33.899: INFO: Pod "pod-projected-secrets-6d2d1383-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135510552s
Jan 20 12:07:36.342: INFO: Pod "pod-projected-secrets-6d2d1383-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.579168276s
Jan 20 12:07:38.360: INFO: Pod "pod-projected-secrets-6d2d1383-3b7d-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.59653937s
STEP: Saw pod success
Jan 20 12:07:38.360: INFO: Pod "pod-projected-secrets-6d2d1383-3b7d-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:07:38.372: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-6d2d1383-3b7d-11ea-8bde-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 20 12:07:38.483: INFO: Waiting for pod pod-projected-secrets-6d2d1383-3b7d-11ea-8bde-0242ac110005 to disappear
Jan 20 12:07:38.561: INFO: Pod pod-projected-secrets-6d2d1383-3b7d-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:07:38.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nfdlf" for this suite.
Jan 20 12:07:44.695: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:07:44.735: INFO: namespace: e2e-tests-projected-nfdlf, resource: bindings, ignored listing per whitelist
Jan 20 12:07:44.976: INFO: namespace e2e-tests-projected-nfdlf deletion completed in 6.393389487s

• [SLOW TEST:17.457 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:07:44.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 12:07:45.226: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7794d271-3b7d-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-gpqml" to be "success or failure"
Jan 20 12:07:45.290: INFO: Pod "downwardapi-volume-7794d271-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 64.283553ms
Jan 20 12:07:47.308: INFO: Pod "downwardapi-volume-7794d271-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082042443s
Jan 20 12:07:49.337: INFO: Pod "downwardapi-volume-7794d271-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110836146s
Jan 20 12:07:51.869: INFO: Pod "downwardapi-volume-7794d271-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.64312732s
Jan 20 12:07:53.971: INFO: Pod "downwardapi-volume-7794d271-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.744608147s
Jan 20 12:07:55.981: INFO: Pod "downwardapi-volume-7794d271-3b7d-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.755276764s
STEP: Saw pod success
Jan 20 12:07:55.981: INFO: Pod "downwardapi-volume-7794d271-3b7d-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:07:55.988: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-7794d271-3b7d-11ea-8bde-0242ac110005 container client-container: 
STEP: delete the pod
Jan 20 12:07:56.373: INFO: Waiting for pod downwardapi-volume-7794d271-3b7d-11ea-8bde-0242ac110005 to disappear
Jan 20 12:07:56.739: INFO: Pod downwardapi-volume-7794d271-3b7d-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:07:56.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gpqml" for this suite.
Jan 20 12:08:02.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:08:02.972: INFO: namespace: e2e-tests-projected-gpqml, resource: bindings, ignored listing per whitelist
Jan 20 12:08:03.213: INFO: namespace e2e-tests-projected-gpqml deletion completed in 6.420764867s

• [SLOW TEST:18.236 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:08:03.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-827bda3e-3b7d-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 20 12:08:03.471: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-827e6b9b-3b7d-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-kwkcn" to be "success or failure"
Jan 20 12:08:03.493: INFO: Pod "pod-projected-secrets-827e6b9b-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 22.403017ms
Jan 20 12:08:05.629: INFO: Pod "pod-projected-secrets-827e6b9b-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157532268s
Jan 20 12:08:07.660: INFO: Pod "pod-projected-secrets-827e6b9b-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189182594s
Jan 20 12:08:09.675: INFO: Pod "pod-projected-secrets-827e6b9b-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203972102s
Jan 20 12:08:11.834: INFO: Pod "pod-projected-secrets-827e6b9b-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.362796106s
Jan 20 12:08:13.906: INFO: Pod "pod-projected-secrets-827e6b9b-3b7d-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.434935413s
STEP: Saw pod success
Jan 20 12:08:13.906: INFO: Pod "pod-projected-secrets-827e6b9b-3b7d-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:08:13.918: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-secrets-827e6b9b-3b7d-11ea-8bde-0242ac110005 container projected-secret-volume-test: 
STEP: delete the pod
Jan 20 12:08:14.122: INFO: Waiting for pod pod-projected-secrets-827e6b9b-3b7d-11ea-8bde-0242ac110005 to disappear
Jan 20 12:08:14.146: INFO: Pod pod-projected-secrets-827e6b9b-3b7d-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:08:14.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-kwkcn" for this suite.
Jan 20 12:08:20.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:08:20.443: INFO: namespace: e2e-tests-projected-kwkcn, resource: bindings, ignored listing per whitelist
Jan 20 12:08:20.491: INFO: namespace e2e-tests-projected-kwkcn deletion completed in 6.327858343s

• [SLOW TEST:17.278 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:08:20.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 20 12:08:20.706: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 20 12:08:20.716: INFO: Waiting for terminating namespaces to be deleted...
Jan 20 12:08:20.720: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 20 12:08:20.732: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 20 12:08:20.732: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 20 12:08:20.732: INFO: 	Container coredns ready: true, restart count 0
Jan 20 12:08:20.732: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container statuses recorded)
Jan 20 12:08:20.732: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 20 12:08:20.732: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 20 12:08:20.732: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 20 12:08:20.732: INFO: 	Container weave ready: true, restart count 0
Jan 20 12:08:20.732: INFO: 	Container weave-npc ready: true, restart count 0
Jan 20 12:08:20.732: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container statuses recorded)
Jan 20 12:08:20.732: INFO: 	Container coredns ready: true, restart count 0
Jan 20 12:08:20.732: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 20 12:08:20.732: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-server-hu5at5svl7ps
Jan 20 12:08:21.018: INFO: Pod coredns-54ff9cd656-79kxx requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 20 12:08:21.018: INFO: Pod coredns-54ff9cd656-bmkk4 requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 20 12:08:21.018: INFO: Pod etcd-hunter-server-hu5at5svl7ps requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 20 12:08:21.018: INFO: Pod kube-apiserver-hunter-server-hu5at5svl7ps requesting resource cpu=250m on Node hunter-server-hu5at5svl7ps
Jan 20 12:08:21.018: INFO: Pod kube-controller-manager-hunter-server-hu5at5svl7ps requesting resource cpu=200m on Node hunter-server-hu5at5svl7ps
Jan 20 12:08:21.018: INFO: Pod kube-proxy-bqnnz requesting resource cpu=0m on Node hunter-server-hu5at5svl7ps
Jan 20 12:08:21.018: INFO: Pod kube-scheduler-hunter-server-hu5at5svl7ps requesting resource cpu=100m on Node hunter-server-hu5at5svl7ps
Jan 20 12:08:21.018: INFO: Pod weave-net-tqwf2 requesting resource cpu=20m on Node hunter-server-hu5at5svl7ps
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8cf74625-3b7d-11ea-8bde-0242ac110005.15eb96fb74a6c451], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-bgcc7/filler-pod-8cf74625-3b7d-11ea-8bde-0242ac110005 to hunter-server-hu5at5svl7ps]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8cf74625-3b7d-11ea-8bde-0242ac110005.15eb96fca3f0fcce], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8cf74625-3b7d-11ea-8bde-0242ac110005.15eb96fd0d399038], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-8cf74625-3b7d-11ea-8bde-0242ac110005.15eb96fd336bd16f], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15eb96fd5c5a1444], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 Insufficient cpu.]
STEP: removing the label node off the node hunter-server-hu5at5svl7ps
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:08:30.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-bgcc7" for this suite.
Jan 20 12:08:36.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:08:36.733: INFO: namespace: e2e-tests-sched-pred-bgcc7, resource: bindings, ignored listing per whitelist
Jan 20 12:08:36.742: INFO: namespace e2e-tests-sched-pred-bgcc7 deletion completed in 6.316053627s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:16.251 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
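The FailedScheduling event above ("0/1 nodes are available: 1 Insufficient cpu.") follows from plain millicore accounting: the logged pod requests sum to 770m, the filler pod consumes the remainder of the node's allocatable CPU, and the additional pod no longer fits. A rough sketch using the values from the log (the 2000m allocatable figure is an assumption for illustration, not taken from the log):

```go
package main

import "fmt"

// fits reports whether a pod requesting reqMilli CPU can schedule on a node
// with allocMilli allocatable CPU, given usedMilli already requested.
func fits(allocMilli, usedMilli, reqMilli int64) bool {
	return usedMilli+reqMilli <= allocMilli
}

func main() {
	// Requests logged before the test, in millicores:
	// coredns x2 (100 each), etcd (0), apiserver (250), controller-manager (200),
	// kube-proxy (0), scheduler (100), weave-net (20).
	used := int64(100 + 100 + 0 + 250 + 200 + 0 + 100 + 20) // = 770m
	alloc := int64(2000)                                    // assumed 2-CPU node

	// The filler pod is sized to consume everything that remains...
	filler := alloc - used
	fmt.Println("filler fits:", fits(alloc, used, filler))
	used += filler

	// ...so any additional CPU request fails: "1 Insufficient cpu."
	fmt.Println("additional pod fits:", fits(alloc, used, 100))
}
```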
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:08:36.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-974f4a75-3b7d-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 20 12:08:38.762: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9757fd85-3b7d-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-vtfl6" to be "success or failure"
Jan 20 12:08:38.803: INFO: Pod "pod-projected-configmaps-9757fd85-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 40.952066ms
Jan 20 12:08:40.870: INFO: Pod "pod-projected-configmaps-9757fd85-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107589523s
Jan 20 12:08:42.898: INFO: Pod "pod-projected-configmaps-9757fd85-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135906972s
Jan 20 12:08:44.972: INFO: Pod "pod-projected-configmaps-9757fd85-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.210305711s
Jan 20 12:08:46.989: INFO: Pod "pod-projected-configmaps-9757fd85-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.227169987s
Jan 20 12:08:49.293: INFO: Pod "pod-projected-configmaps-9757fd85-3b7d-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.530982872s
STEP: Saw pod success
Jan 20 12:08:49.293: INFO: Pod "pod-projected-configmaps-9757fd85-3b7d-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:08:49.310: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-9757fd85-3b7d-11ea-8bde-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 12:08:49.736: INFO: Waiting for pod pod-projected-configmaps-9757fd85-3b7d-11ea-8bde-0242ac110005 to disappear
Jan 20 12:08:49.745: INFO: Pod pod-projected-configmaps-9757fd85-3b7d-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:08:49.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vtfl6" for this suite.
Jan 20 12:08:55.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:08:55.906: INFO: namespace: e2e-tests-projected-vtfl6, resource: bindings, ignored listing per whitelist
Jan 20 12:08:56.063: INFO: namespace e2e-tests-projected-vtfl6 deletion completed in 6.255970631s

• [SLOW TEST:19.321 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:08:56.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jan 20 12:08:56.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-nmmqc'
Jan 20 12:08:56.709: INFO: stderr: ""
Jan 20 12:08:56.709: INFO: stdout: "pod/pause created\n"
Jan 20 12:08:56.709: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 20 12:08:56.709: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-nmmqc" to be "running and ready"
Jan 20 12:08:56.729: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 19.682114ms
Jan 20 12:08:58.740: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03085897s
Jan 20 12:09:01.349: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.639910802s
Jan 20 12:09:03.377: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.667766784s
Jan 20 12:09:05.388: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.679489907s
Jan 20 12:09:05.389: INFO: Pod "pause" satisfied condition "running and ready"
Jan 20 12:09:05.389: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 20 12:09:05.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-nmmqc'
Jan 20 12:09:05.617: INFO: stderr: ""
Jan 20 12:09:05.617: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 20 12:09:05.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-nmmqc'
Jan 20 12:09:05.742: INFO: stderr: ""
Jan 20 12:09:05.742: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 20 12:09:05.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-nmmqc'
Jan 20 12:09:06.028: INFO: stderr: ""
Jan 20 12:09:06.028: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 20 12:09:06.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-nmmqc'
Jan 20 12:09:06.175: INFO: stderr: ""
Jan 20 12:09:06.175: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          10s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jan 20 12:09:06.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-nmmqc'
Jan 20 12:09:06.435: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 12:09:06.435: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 20 12:09:06.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-nmmqc'
Jan 20 12:09:06.626: INFO: stderr: "No resources found.\n"
Jan 20 12:09:06.626: INFO: stdout: ""
Jan 20 12:09:06.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-nmmqc -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 20 12:09:06.764: INFO: stderr: ""
Jan 20 12:09:06.764: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:09:06.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-nmmqc" for this suite.
Jan 20 12:09:12.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:09:12.949: INFO: namespace: e2e-tests-kubectl-nmmqc, resource: bindings, ignored listing per whitelist
Jan 20 12:09:12.996: INFO: namespace e2e-tests-kubectl-nmmqc deletion completed in 6.221575341s

• [SLOW TEST:16.932 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:09:12.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-ac11e87f-3b7d-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 20 12:09:13.224: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac12dd73-3b7d-11ea-8bde-0242ac110005" in namespace "e2e-tests-configmap-vh5zq" to be "success or failure"
Jan 20 12:09:13.311: INFO: Pod "pod-configmaps-ac12dd73-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 86.147026ms
Jan 20 12:09:15.321: INFO: Pod "pod-configmaps-ac12dd73-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096775111s
Jan 20 12:09:17.338: INFO: Pod "pod-configmaps-ac12dd73-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113198223s
Jan 20 12:09:19.347: INFO: Pod "pod-configmaps-ac12dd73-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122789695s
Jan 20 12:09:21.386: INFO: Pod "pod-configmaps-ac12dd73-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.161632648s
Jan 20 12:09:23.402: INFO: Pod "pod-configmaps-ac12dd73-3b7d-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.178081522s
STEP: Saw pod success
Jan 20 12:09:23.403: INFO: Pod "pod-configmaps-ac12dd73-3b7d-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:09:23.409: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-ac12dd73-3b7d-11ea-8bde-0242ac110005 container configmap-volume-test: 
STEP: delete the pod
Jan 20 12:09:23.606: INFO: Waiting for pod pod-configmaps-ac12dd73-3b7d-11ea-8bde-0242ac110005 to disappear
Jan 20 12:09:23.629: INFO: Pod pod-configmaps-ac12dd73-3b7d-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:09:23.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vh5zq" for this suite.
Jan 20 12:09:29.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:09:29.979: INFO: namespace: e2e-tests-configmap-vh5zq, resource: bindings, ignored listing per whitelist
Jan 20 12:09:29.999: INFO: namespace e2e-tests-configmap-vh5zq deletion completed in 6.351074205s

• [SLOW TEST:17.003 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:09:29.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 20 12:09:30.216: INFO: Waiting up to 5m0s for pod "pod-b6345490-3b7d-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-nfrn9" to be "success or failure"
Jan 20 12:09:30.230: INFO: Pod "pod-b6345490-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.762524ms
Jan 20 12:09:32.240: INFO: Pod "pod-b6345490-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023877604s
Jan 20 12:09:34.259: INFO: Pod "pod-b6345490-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043181386s
Jan 20 12:09:36.306: INFO: Pod "pod-b6345490-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090279801s
Jan 20 12:09:38.345: INFO: Pod "pod-b6345490-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128878975s
Jan 20 12:09:40.355: INFO: Pod "pod-b6345490-3b7d-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.138531047s
STEP: Saw pod success
Jan 20 12:09:40.355: INFO: Pod "pod-b6345490-3b7d-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:09:40.361: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-b6345490-3b7d-11ea-8bde-0242ac110005 container test-container: 
STEP: delete the pod
Jan 20 12:09:40.774: INFO: Waiting for pod pod-b6345490-3b7d-11ea-8bde-0242ac110005 to disappear
Jan 20 12:09:41.074: INFO: Pod pod-b6345490-3b7d-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:09:41.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-nfrn9" for this suite.
Jan 20 12:09:47.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:09:47.309: INFO: namespace: e2e-tests-emptydir-nfrn9, resource: bindings, ignored listing per whitelist
Jan 20 12:09:47.406: INFO: namespace e2e-tests-emptydir-nfrn9 deletion completed in 6.318028619s

• [SLOW TEST:17.407 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:09:47.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 20 12:09:47.778: INFO: Waiting up to 5m0s for pod "pod-c0a89e90-3b7d-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-cw9pv" to be "success or failure"
Jan 20 12:09:47.932: INFO: Pod "pod-c0a89e90-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 154.355677ms
Jan 20 12:09:49.966: INFO: Pod "pod-c0a89e90-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188228152s
Jan 20 12:09:51.982: INFO: Pod "pod-c0a89e90-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204652747s
Jan 20 12:09:54.062: INFO: Pod "pod-c0a89e90-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.284002267s
Jan 20 12:09:56.076: INFO: Pod "pod-c0a89e90-3b7d-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.298252988s
Jan 20 12:09:58.091: INFO: Pod "pod-c0a89e90-3b7d-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.313276074s
STEP: Saw pod success
Jan 20 12:09:58.091: INFO: Pod "pod-c0a89e90-3b7d-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:09:58.097: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-c0a89e90-3b7d-11ea-8bde-0242ac110005 container test-container: 
STEP: delete the pod
Jan 20 12:09:58.228: INFO: Waiting for pod pod-c0a89e90-3b7d-11ea-8bde-0242ac110005 to disappear
Jan 20 12:09:58.278: INFO: Pod pod-c0a89e90-3b7d-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:09:58.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-cw9pv" for this suite.
Jan 20 12:10:04.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:10:04.396: INFO: namespace: e2e-tests-emptydir-cw9pv, resource: bindings, ignored listing per whitelist
Jan 20 12:10:04.463: INFO: namespace e2e-tests-emptydir-cw9pv deletion completed in 6.172010105s

• [SLOW TEST:17.057 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:10:04.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jan 20 12:10:04.672: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jan 20 12:10:04.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kg846'
Jan 20 12:10:05.219: INFO: stderr: ""
Jan 20 12:10:05.219: INFO: stdout: "service/redis-slave created\n"
Jan 20 12:10:05.221: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jan 20 12:10:05.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kg846'
Jan 20 12:10:05.779: INFO: stderr: ""
Jan 20 12:10:05.779: INFO: stdout: "service/redis-master created\n"
Jan 20 12:10:05.780: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 20 12:10:05.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kg846'
Jan 20 12:10:06.351: INFO: stderr: ""
Jan 20 12:10:06.351: INFO: stdout: "service/frontend created\n"
Jan 20 12:10:06.352: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jan 20 12:10:06.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kg846'
Jan 20 12:10:06.896: INFO: stderr: ""
Jan 20 12:10:06.896: INFO: stdout: "deployment.extensions/frontend created\n"
Jan 20 12:10:06.896: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 20 12:10:06.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kg846'
Jan 20 12:10:07.519: INFO: stderr: ""
Jan 20 12:10:07.519: INFO: stdout: "deployment.extensions/redis-master created\n"
Jan 20 12:10:07.520: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jan 20 12:10:07.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-kg846'
Jan 20 12:10:08.191: INFO: stderr: ""
Jan 20 12:10:08.191: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jan 20 12:10:08.191: INFO: Waiting for all frontend pods to be Running.
Jan 20 12:10:38.244: INFO: Waiting for frontend to serve content.
Jan 20 12:10:39.666: INFO: Trying to add a new entry to the guestbook.
Jan 20 12:10:39.738: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jan 20 12:10:39.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kg846'
Jan 20 12:10:40.278: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 12:10:40.278: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 20 12:10:40.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kg846'
Jan 20 12:10:40.656: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 12:10:40.656: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 20 12:10:40.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kg846'
Jan 20 12:10:40.838: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 12:10:40.838: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 20 12:10:40.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kg846'
Jan 20 12:10:41.002: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 12:10:41.002: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 20 12:10:41.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kg846'
Jan 20 12:10:41.253: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 12:10:41.254: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 20 12:10:41.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-kg846'
Jan 20 12:10:41.685: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 12:10:41.685: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:10:41.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-kg846" for this suite.
Jan 20 12:11:26.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:11:26.086: INFO: namespace: e2e-tests-kubectl-kg846, resource: bindings, ignored listing per whitelist
Jan 20 12:11:26.221: INFO: namespace e2e-tests-kubectl-kg846 deletion completed in 44.482239567s

• [SLOW TEST:81.758 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:11:26.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2r5b8
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-2r5b8
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-2r5b8
Jan 20 12:11:26.595: INFO: Found 0 stateful pods, waiting for 1
Jan 20 12:11:36.612: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 20 12:11:46.607: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 20 12:11:46.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2r5b8 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 12:11:47.222: INFO: stderr: "I0120 12:11:46.883135    2620 log.go:172] (0xc00082c210) (0xc00059f180) Create stream\nI0120 12:11:46.883661    2620 log.go:172] (0xc00082c210) (0xc00059f180) Stream added, broadcasting: 1\nI0120 12:11:46.894816    2620 log.go:172] (0xc00082c210) Reply frame received for 1\nI0120 12:11:46.894927    2620 log.go:172] (0xc00082c210) (0xc0005da000) Create stream\nI0120 12:11:46.894954    2620 log.go:172] (0xc00082c210) (0xc0005da000) Stream added, broadcasting: 3\nI0120 12:11:46.896564    2620 log.go:172] (0xc00082c210) Reply frame received for 3\nI0120 12:11:46.896603    2620 log.go:172] (0xc00082c210) (0xc00059f220) Create stream\nI0120 12:11:46.896616    2620 log.go:172] (0xc00082c210) (0xc00059f220) Stream added, broadcasting: 5\nI0120 12:11:46.898032    2620 log.go:172] (0xc00082c210) Reply frame received for 5\nI0120 12:11:47.060857    2620 log.go:172] (0xc00082c210) Data frame received for 3\nI0120 12:11:47.060952    2620 log.go:172] (0xc0005da000) (3) Data frame handling\nI0120 12:11:47.060982    2620 log.go:172] (0xc0005da000) (3) Data frame sent\nI0120 12:11:47.209594    2620 log.go:172] (0xc00082c210) Data frame received for 1\nI0120 12:11:47.209830    2620 log.go:172] (0xc00082c210) (0xc00059f220) Stream removed, broadcasting: 5\nI0120 12:11:47.209919    2620 log.go:172] (0xc00059f180) (1) Data frame handling\nI0120 12:11:47.209965    2620 log.go:172] (0xc00059f180) (1) Data frame sent\nI0120 12:11:47.210034    2620 log.go:172] (0xc00082c210) (0xc0005da000) Stream removed, broadcasting: 3\nI0120 12:11:47.210176    2620 log.go:172] (0xc00082c210) (0xc00059f180) Stream removed, broadcasting: 1\nI0120 12:11:47.210204    2620 log.go:172] (0xc00082c210) Go away received\nI0120 12:11:47.211233    2620 log.go:172] (0xc00082c210) (0xc00059f180) Stream removed, broadcasting: 1\nI0120 12:11:47.211273    2620 log.go:172] (0xc00082c210) (0xc0005da000) Stream removed, broadcasting: 3\nI0120 12:11:47.211286    2620 log.go:172] (0xc00082c210) (0xc00059f220) Stream removed, broadcasting: 5\n"
Jan 20 12:11:47.222: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 12:11:47.222: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 20 12:11:47.242: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 20 12:11:57.277: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 20 12:11:57.277: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 12:11:57.430: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999695s
Jan 20 12:11:58.447: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.871501218s
Jan 20 12:11:59.466: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.854482204s
Jan 20 12:12:00.515: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.83455575s
Jan 20 12:12:01.529: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.785533868s
Jan 20 12:12:02.573: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.772106132s
Jan 20 12:12:03.594: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.728143846s
Jan 20 12:12:04.630: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.707221329s
Jan 20 12:12:05.653: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.671534986s
Jan 20 12:12:06.683: INFO: Verifying statefulset ss doesn't scale past 1 for another 647.617995ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-2r5b8
Jan 20 12:12:07.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2r5b8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:12:08.304: INFO: stderr: "I0120 12:12:07.969588    2643 log.go:172] (0xc00014c6e0) (0xc00074a640) Create stream\nI0120 12:12:07.969947    2643 log.go:172] (0xc00014c6e0) (0xc00074a640) Stream added, broadcasting: 1\nI0120 12:12:07.978728    2643 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0120 12:12:07.978762    2643 log.go:172] (0xc00014c6e0) (0xc00074a6e0) Create stream\nI0120 12:12:07.978774    2643 log.go:172] (0xc00014c6e0) (0xc00074a6e0) Stream added, broadcasting: 3\nI0120 12:12:07.981661    2643 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0120 12:12:07.981685    2643 log.go:172] (0xc00014c6e0) (0xc00064cd20) Create stream\nI0120 12:12:07.981712    2643 log.go:172] (0xc00014c6e0) (0xc00064cd20) Stream added, broadcasting: 5\nI0120 12:12:07.983142    2643 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0120 12:12:08.124807    2643 log.go:172] (0xc00014c6e0) Data frame received for 3\nI0120 12:12:08.124958    2643 log.go:172] (0xc00074a6e0) (3) Data frame handling\nI0120 12:12:08.125012    2643 log.go:172] (0xc00074a6e0) (3) Data frame sent\nI0120 12:12:08.291609    2643 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0120 12:12:08.291952    2643 log.go:172] (0xc00014c6e0) (0xc00064cd20) Stream removed, broadcasting: 5\nI0120 12:12:08.292144    2643 log.go:172] (0xc00074a640) (1) Data frame handling\nI0120 12:12:08.292201    2643 log.go:172] (0xc00074a640) (1) Data frame sent\nI0120 12:12:08.292344    2643 log.go:172] (0xc00014c6e0) (0xc00074a6e0) Stream removed, broadcasting: 3\nI0120 12:12:08.292528    2643 log.go:172] (0xc00014c6e0) (0xc00074a640) Stream removed, broadcasting: 1\nI0120 12:12:08.292615    2643 log.go:172] (0xc00014c6e0) Go away received\nI0120 12:12:08.293254    2643 log.go:172] (0xc00014c6e0) (0xc00074a640) Stream removed, broadcasting: 1\nI0120 12:12:08.293275    2643 log.go:172] (0xc00014c6e0) (0xc00074a6e0) Stream removed, broadcasting: 3\nI0120 12:12:08.293290    2643 log.go:172] (0xc00014c6e0) (0xc00064cd20) Stream removed, broadcasting: 5\n"
Jan 20 12:12:08.304: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 12:12:08.304: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 20 12:12:08.331: INFO: Found 1 stateful pods, waiting for 3
Jan 20 12:12:18.343: INFO: Found 2 stateful pods, waiting for 3
Jan 20 12:12:28.349: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 12:12:28.349: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 12:12:28.349: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 20 12:12:28.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2r5b8 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 12:12:29.157: INFO: stderr: "I0120 12:12:28.649313    2665 log.go:172] (0xc0006f00b0) (0xc000710640) Create stream\nI0120 12:12:28.649609    2665 log.go:172] (0xc0006f00b0) (0xc000710640) Stream added, broadcasting: 1\nI0120 12:12:28.658884    2665 log.go:172] (0xc0006f00b0) Reply frame received for 1\nI0120 12:12:28.659074    2665 log.go:172] (0xc0006f00b0) (0xc000666c80) Create stream\nI0120 12:12:28.659098    2665 log.go:172] (0xc0006f00b0) (0xc000666c80) Stream added, broadcasting: 3\nI0120 12:12:28.660900    2665 log.go:172] (0xc0006f00b0) Reply frame received for 3\nI0120 12:12:28.660974    2665 log.go:172] (0xc0006f00b0) (0xc0002e4000) Create stream\nI0120 12:12:28.660991    2665 log.go:172] (0xc0006f00b0) (0xc0002e4000) Stream added, broadcasting: 5\nI0120 12:12:28.662621    2665 log.go:172] (0xc0006f00b0) Reply frame received for 5\nI0120 12:12:28.969766    2665 log.go:172] (0xc0006f00b0) Data frame received for 3\nI0120 12:12:28.970058    2665 log.go:172] (0xc000666c80) (3) Data frame handling\nI0120 12:12:28.970111    2665 log.go:172] (0xc000666c80) (3) Data frame sent\nI0120 12:12:29.145387    2665 log.go:172] (0xc0006f00b0) Data frame received for 1\nI0120 12:12:29.145558    2665 log.go:172] (0xc0006f00b0) (0xc000666c80) Stream removed, broadcasting: 3\nI0120 12:12:29.145599    2665 log.go:172] (0xc000710640) (1) Data frame handling\nI0120 12:12:29.145616    2665 log.go:172] (0xc0006f00b0) (0xc0002e4000) Stream removed, broadcasting: 5\nI0120 12:12:29.145643    2665 log.go:172] (0xc000710640) (1) Data frame sent\nI0120 12:12:29.145656    2665 log.go:172] (0xc0006f00b0) (0xc000710640) Stream removed, broadcasting: 1\nI0120 12:12:29.145670    2665 log.go:172] (0xc0006f00b0) Go away received\nI0120 12:12:29.146410    2665 log.go:172] (0xc0006f00b0) (0xc000710640) Stream removed, broadcasting: 1\nI0120 12:12:29.146419    2665 log.go:172] (0xc0006f00b0) (0xc000666c80) Stream removed, broadcasting: 3\nI0120 12:12:29.146423    2665 log.go:172] (0xc0006f00b0) (0xc0002e4000) Stream removed, broadcasting: 5\n"
Jan 20 12:12:29.157: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 12:12:29.157: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 20 12:12:29.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2r5b8 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 12:12:29.586: INFO: stderr: "I0120 12:12:29.321914    2687 log.go:172] (0xc0007142c0) (0xc000734640) Create stream\nI0120 12:12:29.322176    2687 log.go:172] (0xc0007142c0) (0xc000734640) Stream added, broadcasting: 1\nI0120 12:12:29.326329    2687 log.go:172] (0xc0007142c0) Reply frame received for 1\nI0120 12:12:29.326353    2687 log.go:172] (0xc0007142c0) (0xc000674c80) Create stream\nI0120 12:12:29.326359    2687 log.go:172] (0xc0007142c0) (0xc000674c80) Stream added, broadcasting: 3\nI0120 12:12:29.327176    2687 log.go:172] (0xc0007142c0) Reply frame received for 3\nI0120 12:12:29.327199    2687 log.go:172] (0xc0007142c0) (0xc000278000) Create stream\nI0120 12:12:29.327207    2687 log.go:172] (0xc0007142c0) (0xc000278000) Stream added, broadcasting: 5\nI0120 12:12:29.328026    2687 log.go:172] (0xc0007142c0) Reply frame received for 5\nI0120 12:12:29.458952    2687 log.go:172] (0xc0007142c0) Data frame received for 3\nI0120 12:12:29.459041    2687 log.go:172] (0xc000674c80) (3) Data frame handling\nI0120 12:12:29.459063    2687 log.go:172] (0xc000674c80) (3) Data frame sent\nI0120 12:12:29.570779    2687 log.go:172] (0xc0007142c0) Data frame received for 1\nI0120 12:12:29.571165    2687 log.go:172] (0xc0007142c0) (0xc000674c80) Stream removed, broadcasting: 3\nI0120 12:12:29.571378    2687 log.go:172] (0xc000734640) (1) Data frame handling\nI0120 12:12:29.571472    2687 log.go:172] (0xc000734640) (1) Data frame sent\nI0120 12:12:29.571721    2687 log.go:172] (0xc0007142c0) (0xc000278000) Stream removed, broadcasting: 5\nI0120 12:12:29.572072    2687 log.go:172] (0xc0007142c0) (0xc000734640) Stream removed, broadcasting: 1\nI0120 12:12:29.572123    2687 log.go:172] (0xc0007142c0) Go away received\nI0120 12:12:29.573177    2687 log.go:172] (0xc0007142c0) (0xc000734640) Stream removed, broadcasting: 1\nI0120 12:12:29.573199    2687 log.go:172] (0xc0007142c0) (0xc000674c80) Stream removed, broadcasting: 3\nI0120 12:12:29.573213    2687 log.go:172] (0xc0007142c0) (0xc000278000) Stream removed, broadcasting: 5\n"
Jan 20 12:12:29.586: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 12:12:29.586: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 20 12:12:29.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2r5b8 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 12:12:30.164: INFO: stderr: "I0120 12:12:29.869328    2710 log.go:172] (0xc0001386e0) (0xc000756640) Create stream\nI0120 12:12:29.869480    2710 log.go:172] (0xc0001386e0) (0xc000756640) Stream added, broadcasting: 1\nI0120 12:12:29.873781    2710 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0120 12:12:29.873823    2710 log.go:172] (0xc0001386e0) (0xc000694dc0) Create stream\nI0120 12:12:29.873836    2710 log.go:172] (0xc0001386e0) (0xc000694dc0) Stream added, broadcasting: 3\nI0120 12:12:29.874690    2710 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0120 12:12:29.874709    2710 log.go:172] (0xc0001386e0) (0xc0007566e0) Create stream\nI0120 12:12:29.874716    2710 log.go:172] (0xc0001386e0) (0xc0007566e0) Stream added, broadcasting: 5\nI0120 12:12:29.875490    2710 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0120 12:12:30.061323    2710 log.go:172] (0xc0001386e0) Data frame received for 3\nI0120 12:12:30.061407    2710 log.go:172] (0xc000694dc0) (3) Data frame handling\nI0120 12:12:30.061426    2710 log.go:172] (0xc000694dc0) (3) Data frame sent\nI0120 12:12:30.155487    2710 log.go:172] (0xc0001386e0) Data frame received for 1\nI0120 12:12:30.155598    2710 log.go:172] (0xc0001386e0) (0xc000694dc0) Stream removed, broadcasting: 3\nI0120 12:12:30.155637    2710 log.go:172] (0xc000756640) (1) Data frame handling\nI0120 12:12:30.155654    2710 log.go:172] (0xc000756640) (1) Data frame sent\nI0120 12:12:30.155829    2710 log.go:172] (0xc0001386e0) (0xc0007566e0) Stream removed, broadcasting: 5\nI0120 12:12:30.155873    2710 log.go:172] (0xc0001386e0) (0xc000756640) Stream removed, broadcasting: 1\nI0120 12:12:30.155884    2710 log.go:172] (0xc0001386e0) Go away received\nI0120 12:12:30.156532    2710 log.go:172] (0xc0001386e0) (0xc000756640) Stream removed, broadcasting: 1\nI0120 12:12:30.156540    2710 log.go:172] (0xc0001386e0) (0xc000694dc0) Stream removed, broadcasting: 3\nI0120 12:12:30.156543    2710 log.go:172] (0xc0001386e0) (0xc0007566e0) Stream removed, broadcasting: 5\n"
Jan 20 12:12:30.164: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 12:12:30.164: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 20 12:12:30.164: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 12:12:30.177: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 20 12:12:40.204: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 20 12:12:40.204: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 20 12:12:40.204: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 20 12:12:40.248: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999962s
Jan 20 12:12:41.271: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975741895s
Jan 20 12:12:42.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.952652186s
Jan 20 12:12:43.327: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.925372118s
Jan 20 12:12:44.349: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.896491737s
Jan 20 12:12:45.398: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.874864738s
Jan 20 12:12:46.417: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.82619443s
Jan 20 12:12:47.455: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.806873763s
Jan 20 12:12:48.488: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.768667259s
Jan 20 12:12:49.506: INFO: Verifying statefulset ss doesn't scale past 3 for another 735.633479ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods are running in namespace e2e-tests-statefulset-2r5b8
Jan 20 12:12:50.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2r5b8 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:12:51.327: INFO: stderr: "I0120 12:12:50.884145    2733 log.go:172] (0xc0001386e0) (0xc0005b7400) Create stream\nI0120 12:12:50.884842    2733 log.go:172] (0xc0001386e0) (0xc0005b7400) Stream added, broadcasting: 1\nI0120 12:12:50.900697    2733 log.go:172] (0xc0001386e0) Reply frame received for 1\nI0120 12:12:50.900906    2733 log.go:172] (0xc0001386e0) (0xc0005b74a0) Create stream\nI0120 12:12:50.900944    2733 log.go:172] (0xc0001386e0) (0xc0005b74a0) Stream added, broadcasting: 3\nI0120 12:12:50.904735    2733 log.go:172] (0xc0001386e0) Reply frame received for 3\nI0120 12:12:50.904782    2733 log.go:172] (0xc0001386e0) (0xc00038e000) Create stream\nI0120 12:12:50.904793    2733 log.go:172] (0xc0001386e0) (0xc00038e000) Stream added, broadcasting: 5\nI0120 12:12:50.909387    2733 log.go:172] (0xc0001386e0) Reply frame received for 5\nI0120 12:12:51.141929    2733 log.go:172] (0xc0001386e0) Data frame received for 3\nI0120 12:12:51.142365    2733 log.go:172] (0xc0005b74a0) (3) Data frame handling\nI0120 12:12:51.142399    2733 log.go:172] (0xc0005b74a0) (3) Data frame sent\nI0120 12:12:51.311076    2733 log.go:172] (0xc0001386e0) (0xc0005b74a0) Stream removed, broadcasting: 3\nI0120 12:12:51.311326    2733 log.go:172] (0xc0001386e0) Data frame received for 1\nI0120 12:12:51.311350    2733 log.go:172] (0xc0005b7400) (1) Data frame handling\nI0120 12:12:51.311380    2733 log.go:172] (0xc0005b7400) (1) Data frame sent\nI0120 12:12:51.311433    2733 log.go:172] (0xc0001386e0) (0xc0005b7400) Stream removed, broadcasting: 1\nI0120 12:12:51.311995    2733 log.go:172] (0xc0001386e0) (0xc00038e000) Stream removed, broadcasting: 5\nI0120 12:12:51.312035    2733 log.go:172] (0xc0001386e0) Go away received\nI0120 12:12:51.312514    2733 log.go:172] (0xc0001386e0) (0xc0005b7400) Stream removed, broadcasting: 1\nI0120 12:12:51.312527    2733 log.go:172] (0xc0001386e0) (0xc0005b74a0) Stream removed, broadcasting: 3\nI0120 12:12:51.312533    2733 log.go:172] (0xc0001386e0) (0xc00038e000) Stream removed, broadcasting: 5\n"
Jan 20 12:12:51.327: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 12:12:51.327: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 20 12:12:51.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2r5b8 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:12:52.109: INFO: stderr: "I0120 12:12:51.555300    2754 log.go:172] (0xc0008322c0) (0xc000710640) Create stream\nI0120 12:12:51.555732    2754 log.go:172] (0xc0008322c0) (0xc000710640) Stream added, broadcasting: 1\nI0120 12:12:51.563733    2754 log.go:172] (0xc0008322c0) Reply frame received for 1\nI0120 12:12:51.563800    2754 log.go:172] (0xc0008322c0) (0xc0007106e0) Create stream\nI0120 12:12:51.563813    2754 log.go:172] (0xc0008322c0) (0xc0007106e0) Stream added, broadcasting: 3\nI0120 12:12:51.565065    2754 log.go:172] (0xc0008322c0) Reply frame received for 3\nI0120 12:12:51.565114    2754 log.go:172] (0xc0008322c0) (0xc000648c80) Create stream\nI0120 12:12:51.565154    2754 log.go:172] (0xc0008322c0) (0xc000648c80) Stream added, broadcasting: 5\nI0120 12:12:51.566403    2754 log.go:172] (0xc0008322c0) Reply frame received for 5\nI0120 12:12:51.738186    2754 log.go:172] (0xc0008322c0) Data frame received for 3\nI0120 12:12:51.738869    2754 log.go:172] (0xc0007106e0) (3) Data frame handling\nI0120 12:12:51.739060    2754 log.go:172] (0xc0007106e0) (3) Data frame sent\nI0120 12:12:52.092918    2754 log.go:172] (0xc0008322c0) (0xc000648c80) Stream removed, broadcasting: 5\nI0120 12:12:52.093206    2754 log.go:172] (0xc0008322c0) Data frame received for 1\nI0120 12:12:52.093276    2754 log.go:172] (0xc0008322c0) (0xc0007106e0) Stream removed, broadcasting: 3\nI0120 12:12:52.093323    2754 log.go:172] (0xc000710640) (1) Data frame handling\nI0120 12:12:52.093347    2754 log.go:172] (0xc000710640) (1) Data frame sent\nI0120 12:12:52.093357    2754 log.go:172] (0xc0008322c0) (0xc000710640) Stream removed, broadcasting: 1\nI0120 12:12:52.093378    2754 log.go:172] (0xc0008322c0) Go away received\nI0120 12:12:52.094978    2754 log.go:172] (0xc0008322c0) (0xc000710640) Stream removed, broadcasting: 1\nI0120 12:12:52.095058    2754 log.go:172] (0xc0008322c0) (0xc0007106e0) Stream removed, broadcasting: 3\nI0120 12:12:52.095065    2754 log.go:172] (0xc0008322c0) (0xc000648c80) Stream removed, broadcasting: 5\n"
Jan 20 12:12:52.110: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 12:12:52.110: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 20 12:12:52.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2r5b8 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:12:52.902: INFO: stderr: "I0120 12:12:52.356209    2775 log.go:172] (0xc000152630) (0xc0000fe640) Create stream\nI0120 12:12:52.356466    2775 log.go:172] (0xc000152630) (0xc0000fe640) Stream added, broadcasting: 1\nI0120 12:12:52.364915    2775 log.go:172] (0xc000152630) Reply frame received for 1\nI0120 12:12:52.365095    2775 log.go:172] (0xc000152630) (0xc0001c2dc0) Create stream\nI0120 12:12:52.365112    2775 log.go:172] (0xc000152630) (0xc0001c2dc0) Stream added, broadcasting: 3\nI0120 12:12:52.367525    2775 log.go:172] (0xc000152630) Reply frame received for 3\nI0120 12:12:52.367566    2775 log.go:172] (0xc000152630) (0xc0001bc000) Create stream\nI0120 12:12:52.367579    2775 log.go:172] (0xc000152630) (0xc0001bc000) Stream added, broadcasting: 5\nI0120 12:12:52.368851    2775 log.go:172] (0xc000152630) Reply frame received for 5\nI0120 12:12:52.612402    2775 log.go:172] (0xc000152630) Data frame received for 3\nI0120 12:12:52.612472    2775 log.go:172] (0xc0001c2dc0) (3) Data frame handling\nI0120 12:12:52.612497    2775 log.go:172] (0xc0001c2dc0) (3) Data frame sent\nI0120 12:12:52.886903    2775 log.go:172] (0xc000152630) Data frame received for 1\nI0120 12:12:52.887046    2775 log.go:172] (0xc0000fe640) (1) Data frame handling\nI0120 12:12:52.887091    2775 log.go:172] (0xc0000fe640) (1) Data frame sent\nI0120 12:12:52.887400    2775 log.go:172] (0xc000152630) (0xc0000fe640) Stream removed, broadcasting: 1\nI0120 12:12:52.888787    2775 log.go:172] (0xc000152630) (0xc0001c2dc0) Stream removed, broadcasting: 3\nI0120 12:12:52.889121    2775 log.go:172] (0xc000152630) (0xc0001bc000) Stream removed, broadcasting: 5\nI0120 12:12:52.889221    2775 log.go:172] (0xc000152630) (0xc0000fe640) Stream removed, broadcasting: 1\nI0120 12:12:52.889284    2775 log.go:172] (0xc000152630) (0xc0001c2dc0) Stream removed, broadcasting: 3\nI0120 12:12:52.889310    2775 log.go:172] (0xc000152630) (0xc0001bc000) Stream removed, broadcasting: 5\n"
Jan 20 12:12:52.902: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 12:12:52.902: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 20 12:12:52.902: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 20 12:13:22.954: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2r5b8
Jan 20 12:13:22.965: INFO: Scaling statefulset ss to 0
Jan 20 12:13:22.989: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 12:13:22.995: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:13:23.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-2r5b8" for this suite.
Jan 20 12:13:31.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:13:31.205: INFO: namespace: e2e-tests-statefulset-2r5b8, resource: bindings, ignored listing per whitelist
Jan 20 12:13:31.254: INFO: namespace e2e-tests-statefulset-2r5b8 deletion completed in 8.229693124s

• [SLOW TEST:125.033 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:13:31.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 12:13:31.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-46082a6e-3b7e-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-jqrjh" to be "success or failure"
Jan 20 12:13:31.542: INFO: Pod "downwardapi-volume-46082a6e-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.781399ms
Jan 20 12:13:33.862: INFO: Pod "downwardapi-volume-46082a6e-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.330530154s
Jan 20 12:13:35.922: INFO: Pod "downwardapi-volume-46082a6e-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390976415s
Jan 20 12:13:37.937: INFO: Pod "downwardapi-volume-46082a6e-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.405282147s
Jan 20 12:13:40.131: INFO: Pod "downwardapi-volume-46082a6e-3b7e-11ea-8bde-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.600156768s
Jan 20 12:13:42.709: INFO: Pod "downwardapi-volume-46082a6e-3b7e-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.177555877s
STEP: Saw pod success
Jan 20 12:13:42.709: INFO: Pod "downwardapi-volume-46082a6e-3b7e-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:13:42.731: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-46082a6e-3b7e-11ea-8bde-0242ac110005 container client-container: 
STEP: delete the pod
Jan 20 12:13:42.922: INFO: Waiting for pod downwardapi-volume-46082a6e-3b7e-11ea-8bde-0242ac110005 to disappear
Jan 20 12:13:42.942: INFO: Pod downwardapi-volume-46082a6e-3b7e-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:13:42.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jqrjh" for this suite.
Jan 20 12:13:49.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:13:49.096: INFO: namespace: e2e-tests-projected-jqrjh, resource: bindings, ignored listing per whitelist
Jan 20 12:13:49.195: INFO: namespace e2e-tests-projected-jqrjh deletion completed in 6.244633854s

• [SLOW TEST:17.940 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:13:49.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jan 20 12:13:49.428: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 20 12:13:49.440: INFO: Waiting for terminating namespaces to be deleted...
Jan 20 12:13:49.445: INFO: 
Logging pods the kubelet thinks are on node hunter-server-hu5at5svl7ps before test
Jan 20 12:13:49.462: INFO: kube-controller-manager-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 20 12:13:49.462: INFO: kube-apiserver-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 20 12:13:49.462: INFO: kube-scheduler-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 20 12:13:49.462: INFO: coredns-54ff9cd656-79kxx from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Jan 20 12:13:49.462: INFO: 	Container coredns ready: true, restart count 0
Jan 20 12:13:49.462: INFO: kube-proxy-bqnnz from kube-system started at 2019-08-04 08:33:23 +0000 UTC (1 container status recorded)
Jan 20 12:13:49.462: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 20 12:13:49.462: INFO: etcd-hunter-server-hu5at5svl7ps from kube-system started at  (0 container statuses recorded)
Jan 20 12:13:49.462: INFO: weave-net-tqwf2 from kube-system started at 2019-08-04 08:33:23 +0000 UTC (2 container statuses recorded)
Jan 20 12:13:49.462: INFO: 	Container weave ready: true, restart count 0
Jan 20 12:13:49.462: INFO: 	Container weave-npc ready: true, restart count 0
Jan 20 12:13:49.462: INFO: coredns-54ff9cd656-bmkk4 from kube-system started at 2019-08-04 08:33:46 +0000 UTC (1 container status recorded)
Jan 20 12:13:49.462: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15eb9747f3d499dc], Reason = [FailedScheduling], Message = [0/1 nodes are available: 1 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:13:50.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-zwzk5" for this suite.
Jan 20 12:13:56.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:13:56.793: INFO: namespace: e2e-tests-sched-pred-zwzk5, resource: bindings, ignored listing per whitelist
Jan 20 12:13:56.933: INFO: namespace e2e-tests-sched-pred-zwzk5 deletion completed in 6.25873886s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.738 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:13:56.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 12:13:57.169: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5550c688-3b7e-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-nqmmv" to be "success or failure"
Jan 20 12:13:57.184: INFO: Pod "downwardapi-volume-5550c688-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 14.221564ms
Jan 20 12:13:59.197: INFO: Pod "downwardapi-volume-5550c688-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027077295s
Jan 20 12:14:01.215: INFO: Pod "downwardapi-volume-5550c688-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045776608s
Jan 20 12:14:03.330: INFO: Pod "downwardapi-volume-5550c688-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160087725s
Jan 20 12:14:05.340: INFO: Pod "downwardapi-volume-5550c688-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170441921s
Jan 20 12:14:07.351: INFO: Pod "downwardapi-volume-5550c688-3b7e-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.181304083s
STEP: Saw pod success
Jan 20 12:14:07.351: INFO: Pod "downwardapi-volume-5550c688-3b7e-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:14:07.359: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-5550c688-3b7e-11ea-8bde-0242ac110005 container client-container: 
STEP: delete the pod
Jan 20 12:14:08.272: INFO: Waiting for pod downwardapi-volume-5550c688-3b7e-11ea-8bde-0242ac110005 to disappear
Jan 20 12:14:08.288: INFO: Pod downwardapi-volume-5550c688-3b7e-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:14:08.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-nqmmv" for this suite.
Jan 20 12:14:14.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:14:14.674: INFO: namespace: e2e-tests-projected-nqmmv, resource: bindings, ignored listing per whitelist
Jan 20 12:14:14.742: INFO: namespace e2e-tests-projected-nqmmv deletion completed in 6.447637308s

• [SLOW TEST:17.808 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:14:14.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 20 12:14:14.991: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jan 20 12:14:14.999: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-7lgj5/daemonsets","resourceVersion":"18850671"},"items":null}

Jan 20 12:14:15.004: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-7lgj5/pods","resourceVersion":"18850671"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:14:15.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-7lgj5" for this suite.
Jan 20 12:14:21.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:14:21.215: INFO: namespace: e2e-tests-daemonsets-7lgj5, resource: bindings, ignored listing per whitelist
Jan 20 12:14:21.261: INFO: namespace e2e-tests-daemonsets-7lgj5 deletion completed in 6.188318901s

S [SKIPPING] [6.519 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jan 20 12:14:14.991: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:14:21.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 20 12:14:21.646: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 20 12:14:26.672: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:14:28.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-h4qvp" for this suite.
Jan 20 12:14:39.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:14:40.104: INFO: namespace: e2e-tests-replication-controller-h4qvp, resource: bindings, ignored listing per whitelist
Jan 20 12:14:40.166: INFO: namespace e2e-tests-replication-controller-h4qvp deletion completed in 12.138815071s

• [SLOW TEST:18.905 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:14:40.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 20 12:14:51.413: INFO: Successfully updated pod "labelsupdate6f4eefbe-3b7e-11ea-8bde-0242ac110005"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:14:53.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wfl6m" for this suite.
Jan 20 12:15:19.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:15:19.646: INFO: namespace: e2e-tests-downward-api-wfl6m, resource: bindings, ignored listing per whitelist
Jan 20 12:15:19.832: INFO: namespace e2e-tests-downward-api-wfl6m deletion completed in 26.291206775s

• [SLOW TEST:39.665 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:15:19.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jan 20 12:15:20.043: INFO: Waiting up to 5m0s for pod "var-expansion-86b6a2e6-3b7e-11ea-8bde-0242ac110005" in namespace "e2e-tests-var-expansion-rkgl5" to be "success or failure"
Jan 20 12:15:20.051: INFO: Pod "var-expansion-86b6a2e6-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.47614ms
Jan 20 12:15:22.074: INFO: Pod "var-expansion-86b6a2e6-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03097797s
Jan 20 12:15:24.094: INFO: Pod "var-expansion-86b6a2e6-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050940414s
Jan 20 12:15:26.107: INFO: Pod "var-expansion-86b6a2e6-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064881538s
Jan 20 12:15:28.121: INFO: Pod "var-expansion-86b6a2e6-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07821605s
Jan 20 12:15:30.207: INFO: Pod "var-expansion-86b6a2e6-3b7e-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.164642073s
STEP: Saw pod success
Jan 20 12:15:30.207: INFO: Pod "var-expansion-86b6a2e6-3b7e-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:15:30.214: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod var-expansion-86b6a2e6-3b7e-11ea-8bde-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 20 12:15:30.435: INFO: Waiting for pod var-expansion-86b6a2e6-3b7e-11ea-8bde-0242ac110005 to disappear
Jan 20 12:15:30.455: INFO: Pod var-expansion-86b6a2e6-3b7e-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:15:30.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-rkgl5" for this suite.
Jan 20 12:15:36.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:15:36.641: INFO: namespace: e2e-tests-var-expansion-rkgl5, resource: bindings, ignored listing per whitelist
Jan 20 12:15:36.746: INFO: namespace e2e-tests-var-expansion-rkgl5 deletion completed in 6.270030075s

• [SLOW TEST:16.913 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:15:36.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 20 12:15:36.958: INFO: Waiting up to 5m0s for pod "downward-api-90ca2f37-3b7e-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-z5ccx" to be "success or failure"
Jan 20 12:15:36.974: INFO: Pod "downward-api-90ca2f37-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.872658ms
Jan 20 12:15:38.994: INFO: Pod "downward-api-90ca2f37-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035697137s
Jan 20 12:15:41.024: INFO: Pod "downward-api-90ca2f37-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065082412s
Jan 20 12:15:43.232: INFO: Pod "downward-api-90ca2f37-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.273704736s
Jan 20 12:15:45.267: INFO: Pod "downward-api-90ca2f37-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.308332524s
Jan 20 12:15:47.469: INFO: Pod "downward-api-90ca2f37-3b7e-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.510335301s
STEP: Saw pod success
Jan 20 12:15:47.469: INFO: Pod "downward-api-90ca2f37-3b7e-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:15:47.480: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-90ca2f37-3b7e-11ea-8bde-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 20 12:15:47.561: INFO: Waiting for pod downward-api-90ca2f37-3b7e-11ea-8bde-0242ac110005 to disappear
Jan 20 12:15:47.645: INFO: Pod downward-api-90ca2f37-3b7e-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:15:47.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-z5ccx" for this suite.
Jan 20 12:15:53.690: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:15:53.894: INFO: namespace: e2e-tests-downward-api-z5ccx, resource: bindings, ignored listing per whitelist
Jan 20 12:15:53.913: INFO: namespace e2e-tests-downward-api-z5ccx deletion completed in 6.254250525s

• [SLOW TEST:17.168 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:15:53.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 20 12:16:03.050: INFO: Successfully updated pod "labelsupdate9b106331-3b7e-11ea-8bde-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:16:05.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-sv8lf" for this suite.
Jan 20 12:16:27.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:16:27.500: INFO: namespace: e2e-tests-projected-sv8lf, resource: bindings, ignored listing per whitelist
Jan 20 12:16:27.668: INFO: namespace e2e-tests-projected-sv8lf deletion completed in 22.383219766s

• [SLOW TEST:33.753 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:16:27.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 20 12:16:27.963: INFO: Waiting up to 5m0s for pod "pod-af279807-3b7e-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-jjspw" to be "success or failure"
Jan 20 12:16:27.974: INFO: Pod "pod-af279807-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.655328ms
Jan 20 12:16:30.002: INFO: Pod "pod-af279807-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03876754s
Jan 20 12:16:32.025: INFO: Pod "pod-af279807-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061872323s
Jan 20 12:16:34.046: INFO: Pod "pod-af279807-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083061471s
Jan 20 12:16:36.067: INFO: Pod "pod-af279807-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104067088s
Jan 20 12:16:38.784: INFO: Pod "pod-af279807-3b7e-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.820701215s
STEP: Saw pod success
Jan 20 12:16:38.784: INFO: Pod "pod-af279807-3b7e-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:16:38.803: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-af279807-3b7e-11ea-8bde-0242ac110005 container test-container: 
STEP: delete the pod
Jan 20 12:16:39.236: INFO: Waiting for pod pod-af279807-3b7e-11ea-8bde-0242ac110005 to disappear
Jan 20 12:16:39.248: INFO: Pod pod-af279807-3b7e-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:16:39.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jjspw" for this suite.
Jan 20 12:16:45.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:16:45.361: INFO: namespace: e2e-tests-emptydir-jjspw, resource: bindings, ignored listing per whitelist
Jan 20 12:16:45.524: INFO: namespace e2e-tests-emptydir-jjspw deletion completed in 6.266133847s

• [SLOW TEST:17.856 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:16:45.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-b9c09bf4-3b7e-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 20 12:16:45.673: INFO: Waiting up to 5m0s for pod "pod-secrets-b9c244f4-3b7e-11ea-8bde-0242ac110005" in namespace "e2e-tests-secrets-hcmcv" to be "success or failure"
Jan 20 12:16:45.817: INFO: Pod "pod-secrets-b9c244f4-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 144.095744ms
Jan 20 12:16:47.835: INFO: Pod "pod-secrets-b9c244f4-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161695898s
Jan 20 12:16:49.872: INFO: Pod "pod-secrets-b9c244f4-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198519484s
Jan 20 12:16:51.951: INFO: Pod "pod-secrets-b9c244f4-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.277555053s
Jan 20 12:16:54.058: INFO: Pod "pod-secrets-b9c244f4-3b7e-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.384347029s
Jan 20 12:16:56.069: INFO: Pod "pod-secrets-b9c244f4-3b7e-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.396206304s
STEP: Saw pod success
Jan 20 12:16:56.070: INFO: Pod "pod-secrets-b9c244f4-3b7e-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:16:56.145: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-b9c244f4-3b7e-11ea-8bde-0242ac110005 container secret-volume-test: 
STEP: delete the pod
Jan 20 12:16:56.214: INFO: Waiting for pod pod-secrets-b9c244f4-3b7e-11ea-8bde-0242ac110005 to disappear
Jan 20 12:16:56.219: INFO: Pod pod-secrets-b9c244f4-3b7e-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:16:56.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-hcmcv" for this suite.
Jan 20 12:17:04.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:17:04.406: INFO: namespace: e2e-tests-secrets-hcmcv, resource: bindings, ignored listing per whitelist
Jan 20 12:17:04.420: INFO: namespace e2e-tests-secrets-hcmcv deletion completed in 8.193815965s

• [SLOW TEST:18.896 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:17:04.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-b4j9l
Jan 20 12:17:14.797: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-b4j9l
STEP: checking the pod's current state and verifying that restartCount is present
Jan 20 12:17:14.803: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:21:15.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-b4j9l" for this suite.
Jan 20 12:21:21.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:21:21.408: INFO: namespace: e2e-tests-container-probe-b4j9l, resource: bindings, ignored listing per whitelist
Jan 20 12:21:21.439: INFO: namespace e2e-tests-container-probe-b4j9l deletion completed in 6.301083946s

• [SLOW TEST:257.018 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:21:21.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jan 20 12:21:21.663: INFO: Waiting up to 5m0s for pod "downward-api-5e415d87-3b7f-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-r2xsn" to be "success or failure"
Jan 20 12:21:21.684: INFO: Pod "downward-api-5e415d87-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.108411ms
Jan 20 12:21:23.708: INFO: Pod "downward-api-5e415d87-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045059138s
Jan 20 12:21:25.744: INFO: Pod "downward-api-5e415d87-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080348383s
Jan 20 12:21:27.773: INFO: Pod "downward-api-5e415d87-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110028118s
Jan 20 12:21:29.805: INFO: Pod "downward-api-5e415d87-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.141152403s
Jan 20 12:21:31.861: INFO: Pod "downward-api-5e415d87-3b7f-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.197547427s
STEP: Saw pod success
Jan 20 12:21:31.861: INFO: Pod "downward-api-5e415d87-3b7f-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:21:31.876: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downward-api-5e415d87-3b7f-11ea-8bde-0242ac110005 container dapi-container: 
STEP: delete the pod
Jan 20 12:21:32.102: INFO: Waiting for pod downward-api-5e415d87-3b7f-11ea-8bde-0242ac110005 to disappear
Jan 20 12:21:32.166: INFO: Pod downward-api-5e415d87-3b7f-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:21:32.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-r2xsn" for this suite.
Jan 20 12:21:38.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:21:38.406: INFO: namespace: e2e-tests-downward-api-r2xsn, resource: bindings, ignored listing per whitelist
Jan 20 12:21:38.455: INFO: namespace e2e-tests-downward-api-r2xsn deletion completed in 6.278478896s

• [SLOW TEST:17.016 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:21:38.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-l9hh2
I0120 12:21:38.779176       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-l9hh2, replica count: 1
I0120 12:21:39.829760       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:21:40.830058       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:21:41.830496       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:21:42.830887       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:21:43.831247       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:21:44.831800       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:21:45.832169       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:21:46.832492       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0120 12:21:47.832794       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 20 12:21:48.044: INFO: Created: latency-svc-tfwx7
Jan 20 12:21:48.100: INFO: Got endpoints: latency-svc-tfwx7 [166.955836ms]
Jan 20 12:21:48.219: INFO: Created: latency-svc-jk8zb
Jan 20 12:21:48.262: INFO: Got endpoints: latency-svc-jk8zb [161.134653ms]
Jan 20 12:21:48.321: INFO: Created: latency-svc-pjd8d
Jan 20 12:21:48.380: INFO: Got endpoints: latency-svc-pjd8d [279.442148ms]
Jan 20 12:21:48.401: INFO: Created: latency-svc-r4q2k
Jan 20 12:21:48.622: INFO: Got endpoints: latency-svc-r4q2k [521.712188ms]
Jan 20 12:21:48.634: INFO: Created: latency-svc-svrwd
Jan 20 12:21:48.691: INFO: Got endpoints: latency-svc-svrwd [590.267429ms]
Jan 20 12:21:48.824: INFO: Created: latency-svc-qhg8x
Jan 20 12:21:48.835: INFO: Got endpoints: latency-svc-qhg8x [734.32973ms]
Jan 20 12:21:49.082: INFO: Created: latency-svc-cbsl6
Jan 20 12:21:49.096: INFO: Got endpoints: latency-svc-cbsl6 [995.596537ms]
Jan 20 12:21:49.276: INFO: Created: latency-svc-4cj5c
Jan 20 12:21:49.507: INFO: Got endpoints: latency-svc-4cj5c [1.406484671s]
Jan 20 12:21:49.515: INFO: Created: latency-svc-lzm7s
Jan 20 12:21:49.532: INFO: Got endpoints: latency-svc-lzm7s [1.43051422s]
Jan 20 12:21:49.666: INFO: Created: latency-svc-5bkc7
Jan 20 12:21:49.747: INFO: Created: latency-svc-btqgp
Jan 20 12:21:49.758: INFO: Got endpoints: latency-svc-5bkc7 [1.657406309s]
Jan 20 12:21:49.874: INFO: Got endpoints: latency-svc-btqgp [1.77340951s]
Jan 20 12:21:49.885: INFO: Created: latency-svc-shfrf
Jan 20 12:21:50.079: INFO: Got endpoints: latency-svc-shfrf [1.978077641s]
Jan 20 12:21:50.093: INFO: Created: latency-svc-pxd5n
Jan 20 12:21:50.120: INFO: Got endpoints: latency-svc-pxd5n [2.019365964s]
Jan 20 12:21:50.301: INFO: Created: latency-svc-hmvk9
Jan 20 12:21:50.301: INFO: Got endpoints: latency-svc-hmvk9 [2.201085288s]
Jan 20 12:21:50.437: INFO: Created: latency-svc-5f9fx
Jan 20 12:21:50.480: INFO: Got endpoints: latency-svc-5f9fx [2.380360428s]
Jan 20 12:21:50.742: INFO: Created: latency-svc-chshx
Jan 20 12:21:50.788: INFO: Got endpoints: latency-svc-chshx [2.687963252s]
Jan 20 12:21:50.856: INFO: Created: latency-svc-d9hmn
Jan 20 12:21:50.939: INFO: Got endpoints: latency-svc-d9hmn [2.676810751s]
Jan 20 12:21:51.003: INFO: Created: latency-svc-qbwdg
Jan 20 12:21:51.013: INFO: Got endpoints: latency-svc-qbwdg [2.632091265s]
Jan 20 12:21:51.164: INFO: Created: latency-svc-lxbs6
Jan 20 12:21:51.205: INFO: Got endpoints: latency-svc-lxbs6 [2.583398627s]
Jan 20 12:21:51.364: INFO: Created: latency-svc-l7gcm
Jan 20 12:21:51.447: INFO: Created: latency-svc-9bnw7
Jan 20 12:21:51.448: INFO: Got endpoints: latency-svc-l7gcm [2.756323466s]
Jan 20 12:21:51.536: INFO: Got endpoints: latency-svc-9bnw7 [2.700835147s]
Jan 20 12:21:51.561: INFO: Created: latency-svc-gbv7w
Jan 20 12:21:51.582: INFO: Got endpoints: latency-svc-gbv7w [2.48512638s]
Jan 20 12:21:51.726: INFO: Created: latency-svc-hlpgm
Jan 20 12:21:51.739: INFO: Got endpoints: latency-svc-hlpgm [2.231525122s]
Jan 20 12:21:52.166: INFO: Created: latency-svc-9ng4q
Jan 20 12:21:52.265: INFO: Got endpoints: latency-svc-9ng4q [2.732986076s]
Jan 20 12:21:52.353: INFO: Created: latency-svc-tmnlg
Jan 20 12:21:52.400: INFO: Got endpoints: latency-svc-tmnlg [2.6409967s]
Jan 20 12:21:52.647: INFO: Created: latency-svc-dhgb6
Jan 20 12:21:52.665: INFO: Got endpoints: latency-svc-dhgb6 [2.790731907s]
Jan 20 12:21:52.846: INFO: Created: latency-svc-69crm
Jan 20 12:21:52.858: INFO: Got endpoints: latency-svc-69crm [2.778511524s]
Jan 20 12:21:53.022: INFO: Created: latency-svc-cnqcb
Jan 20 12:21:53.044: INFO: Got endpoints: latency-svc-cnqcb [2.923229967s]
Jan 20 12:21:53.242: INFO: Created: latency-svc-krbmv
Jan 20 12:21:53.242: INFO: Got endpoints: latency-svc-krbmv [2.941165891s]
Jan 20 12:21:53.324: INFO: Created: latency-svc-f7snk
Jan 20 12:21:53.450: INFO: Got endpoints: latency-svc-f7snk [2.969066235s]
Jan 20 12:21:53.491: INFO: Created: latency-svc-zdp5p
Jan 20 12:21:53.529: INFO: Got endpoints: latency-svc-zdp5p [2.741214024s]
Jan 20 12:21:53.657: INFO: Created: latency-svc-kw4hq
Jan 20 12:21:53.707: INFO: Got endpoints: latency-svc-kw4hq [2.768248773s]
Jan 20 12:21:53.761: INFO: Created: latency-svc-5lbds
Jan 20 12:21:53.955: INFO: Got endpoints: latency-svc-5lbds [2.9424951s]
Jan 20 12:21:53.963: INFO: Created: latency-svc-v6n7b
Jan 20 12:21:53.988: INFO: Got endpoints: latency-svc-v6n7b [2.782890648s]
Jan 20 12:21:54.191: INFO: Created: latency-svc-l6dxw
Jan 20 12:21:54.209: INFO: Got endpoints: latency-svc-l6dxw [2.761169896s]
Jan 20 12:21:54.398: INFO: Created: latency-svc-pqkjm
Jan 20 12:21:54.428: INFO: Got endpoints: latency-svc-pqkjm [2.891173048s]
Jan 20 12:21:54.629: INFO: Created: latency-svc-xbq8z
Jan 20 12:21:54.640: INFO: Got endpoints: latency-svc-xbq8z [3.058550487s]
Jan 20 12:21:54.846: INFO: Created: latency-svc-ql8dm
Jan 20 12:21:54.848: INFO: Got endpoints: latency-svc-ql8dm [3.108495201s]
Jan 20 12:21:54.929: INFO: Created: latency-svc-chgzk
Jan 20 12:21:55.051: INFO: Got endpoints: latency-svc-chgzk [2.786374162s]
Jan 20 12:21:55.123: INFO: Created: latency-svc-vclwl
Jan 20 12:21:55.150: INFO: Got endpoints: latency-svc-vclwl [2.750115098s]
Jan 20 12:21:55.291: INFO: Created: latency-svc-sjfgj
Jan 20 12:21:55.303: INFO: Got endpoints: latency-svc-sjfgj [251.580349ms]
Jan 20 12:21:55.389: INFO: Created: latency-svc-mrqjd
Jan 20 12:21:55.477: INFO: Got endpoints: latency-svc-mrqjd [2.81135141s]
Jan 20 12:21:55.514: INFO: Created: latency-svc-p8zfz
Jan 20 12:21:55.538: INFO: Got endpoints: latency-svc-p8zfz [2.68003955s]
Jan 20 12:21:55.823: INFO: Created: latency-svc-4pxs7
Jan 20 12:21:55.878: INFO: Got endpoints: latency-svc-4pxs7 [2.83427531s]
Jan 20 12:21:56.002: INFO: Created: latency-svc-xhvl9
Jan 20 12:21:56.052: INFO: Got endpoints: latency-svc-xhvl9 [2.809610049s]
Jan 20 12:21:56.197: INFO: Created: latency-svc-ghdcx
Jan 20 12:21:56.222: INFO: Got endpoints: latency-svc-ghdcx [2.771862019s]
Jan 20 12:21:56.273: INFO: Created: latency-svc-qpjs4
Jan 20 12:21:56.467: INFO: Got endpoints: latency-svc-qpjs4 [2.938038953s]
Jan 20 12:21:56.513: INFO: Created: latency-svc-6bm9f
Jan 20 12:21:56.540: INFO: Got endpoints: latency-svc-6bm9f [2.832964646s]
Jan 20 12:21:56.780: INFO: Created: latency-svc-zrms7
Jan 20 12:21:56.919: INFO: Got endpoints: latency-svc-zrms7 [2.964084621s]
Jan 20 12:21:56.932: INFO: Created: latency-svc-4vcj9
Jan 20 12:21:56.956: INFO: Got endpoints: latency-svc-4vcj9 [2.967252202s]
Jan 20 12:21:57.171: INFO: Created: latency-svc-cvg2r
Jan 20 12:21:57.197: INFO: Got endpoints: latency-svc-cvg2r [2.987386476s]
Jan 20 12:21:57.255: INFO: Created: latency-svc-2pq7g
Jan 20 12:21:57.362: INFO: Got endpoints: latency-svc-2pq7g [2.934357947s]
Jan 20 12:21:57.396: INFO: Created: latency-svc-fkwbx
Jan 20 12:21:57.408: INFO: Got endpoints: latency-svc-fkwbx [2.767335976s]
Jan 20 12:21:57.457: INFO: Created: latency-svc-fnfmt
Jan 20 12:21:57.540: INFO: Got endpoints: latency-svc-fnfmt [2.691727708s]
Jan 20 12:21:57.584: INFO: Created: latency-svc-ws7pr
Jan 20 12:21:57.601: INFO: Got endpoints: latency-svc-ws7pr [2.450752174s]
Jan 20 12:21:57.732: INFO: Created: latency-svc-6fhkj
Jan 20 12:21:57.764: INFO: Got endpoints: latency-svc-6fhkj [2.460623448s]
Jan 20 12:21:57.938: INFO: Created: latency-svc-t8g9s
Jan 20 12:21:57.946: INFO: Got endpoints: latency-svc-t8g9s [2.469116697s]
Jan 20 12:21:58.124: INFO: Created: latency-svc-25ggv
Jan 20 12:21:58.157: INFO: Got endpoints: latency-svc-25ggv [2.617984281s]
Jan 20 12:21:58.329: INFO: Created: latency-svc-v49bc
Jan 20 12:21:58.336: INFO: Got endpoints: latency-svc-v49bc [2.457821585s]
Jan 20 12:21:58.447: INFO: Created: latency-svc-79p8r
Jan 20 12:21:58.496: INFO: Got endpoints: latency-svc-79p8r [2.44444783s]
Jan 20 12:21:58.721: INFO: Created: latency-svc-f2hm8
Jan 20 12:21:58.878: INFO: Got endpoints: latency-svc-f2hm8 [2.656210765s]
Jan 20 12:21:58.905: INFO: Created: latency-svc-9wdxb
Jan 20 12:21:58.939: INFO: Got endpoints: latency-svc-9wdxb [2.471768791s]
Jan 20 12:21:59.083: INFO: Created: latency-svc-nzzkc
Jan 20 12:21:59.091: INFO: Got endpoints: latency-svc-nzzkc [2.550309938s]
Jan 20 12:21:59.301: INFO: Created: latency-svc-42xnc
Jan 20 12:21:59.319: INFO: Got endpoints: latency-svc-42xnc [2.398995736s]
Jan 20 12:21:59.455: INFO: Created: latency-svc-rvg2z
Jan 20 12:21:59.477: INFO: Got endpoints: latency-svc-rvg2z [2.521061364s]
Jan 20 12:21:59.536: INFO: Created: latency-svc-smz2w
Jan 20 12:21:59.610: INFO: Got endpoints: latency-svc-smz2w [2.413214059s]
Jan 20 12:21:59.689: INFO: Created: latency-svc-pm2cq
Jan 20 12:21:59.697: INFO: Got endpoints: latency-svc-pm2cq [2.334356031s]
Jan 20 12:21:59.812: INFO: Created: latency-svc-zklfb
Jan 20 12:21:59.842: INFO: Got endpoints: latency-svc-zklfb [2.433993058s]
Jan 20 12:21:59.996: INFO: Created: latency-svc-h2tkc
Jan 20 12:22:00.016: INFO: Got endpoints: latency-svc-h2tkc [2.476051975s]
Jan 20 12:22:00.157: INFO: Created: latency-svc-9zq89
Jan 20 12:22:00.193: INFO: Got endpoints: latency-svc-9zq89 [2.591837057s]
Jan 20 12:22:00.306: INFO: Created: latency-svc-6gdfg
Jan 20 12:22:00.332: INFO: Got endpoints: latency-svc-6gdfg [2.56835142s]
Jan 20 12:22:00.431: INFO: Created: latency-svc-xsb2k
Jan 20 12:22:00.472: INFO: Got endpoints: latency-svc-xsb2k [2.525388243s]
Jan 20 12:22:00.674: INFO: Created: latency-svc-h9xq2
Jan 20 12:22:00.681: INFO: Got endpoints: latency-svc-h9xq2 [2.524307008s]
Jan 20 12:22:00.783: INFO: Created: latency-svc-h5wmb
Jan 20 12:22:01.386: INFO: Got endpoints: latency-svc-h5wmb [3.049264142s]
Jan 20 12:22:01.645: INFO: Created: latency-svc-9klph
Jan 20 12:22:01.704: INFO: Got endpoints: latency-svc-9klph [3.206819558s]
Jan 20 12:22:01.785: INFO: Created: latency-svc-7jsf2
Jan 20 12:22:01.949: INFO: Got endpoints: latency-svc-7jsf2 [3.070954006s]
Jan 20 12:22:01.954: INFO: Created: latency-svc-ppqlr
Jan 20 12:22:01.968: INFO: Got endpoints: latency-svc-ppqlr [3.028379551s]
Jan 20 12:22:02.032: INFO: Created: latency-svc-vq87p
Jan 20 12:22:02.145: INFO: Got endpoints: latency-svc-vq87p [3.053579147s]
Jan 20 12:22:02.180: INFO: Created: latency-svc-h69g7
Jan 20 12:22:02.190: INFO: Got endpoints: latency-svc-h69g7 [2.871045603s]
Jan 20 12:22:02.316: INFO: Created: latency-svc-w47xr
Jan 20 12:22:02.329: INFO: Got endpoints: latency-svc-w47xr [2.851642351s]
Jan 20 12:22:02.476: INFO: Created: latency-svc-sbr4v
Jan 20 12:22:02.500: INFO: Got endpoints: latency-svc-sbr4v [2.889308467s]
Jan 20 12:22:02.716: INFO: Created: latency-svc-t7s5t
Jan 20 12:22:02.719: INFO: Got endpoints: latency-svc-t7s5t [3.021930095s]
Jan 20 12:22:02.917: INFO: Created: latency-svc-jv5r7
Jan 20 12:22:02.954: INFO: Got endpoints: latency-svc-jv5r7 [3.112025309s]
Jan 20 12:22:03.091: INFO: Created: latency-svc-f224j
Jan 20 12:22:03.157: INFO: Got endpoints: latency-svc-f224j [3.141327806s]
Jan 20 12:22:03.171: INFO: Created: latency-svc-sm845
Jan 20 12:22:03.271: INFO: Got endpoints: latency-svc-sm845 [3.077856549s]
Jan 20 12:22:03.318: INFO: Created: latency-svc-kbrx2
Jan 20 12:22:03.331: INFO: Got endpoints: latency-svc-kbrx2 [2.998150011s]
Jan 20 12:22:03.452: INFO: Created: latency-svc-g9gf9
Jan 20 12:22:03.478: INFO: Got endpoints: latency-svc-g9gf9 [3.006647679s]
Jan 20 12:22:03.734: INFO: Created: latency-svc-lqmgb
Jan 20 12:22:03.807: INFO: Got endpoints: latency-svc-lqmgb [3.125412431s]
Jan 20 12:22:03.978: INFO: Created: latency-svc-gwmzq
Jan 20 12:22:04.145: INFO: Got endpoints: latency-svc-gwmzq [2.759114344s]
Jan 20 12:22:04.208: INFO: Created: latency-svc-vb4zp
Jan 20 12:22:04.229: INFO: Got endpoints: latency-svc-vb4zp [2.524851894s]
Jan 20 12:22:04.462: INFO: Created: latency-svc-rpvvr
Jan 20 12:22:04.493: INFO: Got endpoints: latency-svc-rpvvr [2.543278154s]
Jan 20 12:22:04.725: INFO: Created: latency-svc-nb6vd
Jan 20 12:22:04.873: INFO: Got endpoints: latency-svc-nb6vd [2.904963952s]
Jan 20 12:22:04.958: INFO: Created: latency-svc-jhq6d
Jan 20 12:22:05.050: INFO: Got endpoints: latency-svc-jhq6d [2.904711967s]
Jan 20 12:22:05.080: INFO: Created: latency-svc-vnmcn
Jan 20 12:22:05.088: INFO: Got endpoints: latency-svc-vnmcn [2.89817541s]
Jan 20 12:22:05.129: INFO: Created: latency-svc-n7rkm
Jan 20 12:22:05.140: INFO: Got endpoints: latency-svc-n7rkm [2.810722127s]
Jan 20 12:22:05.258: INFO: Created: latency-svc-2lls4
Jan 20 12:22:05.276: INFO: Got endpoints: latency-svc-2lls4 [2.776673699s]
Jan 20 12:22:05.390: INFO: Created: latency-svc-9cfc9
Jan 20 12:22:05.419: INFO: Got endpoints: latency-svc-9cfc9 [2.699661565s]
Jan 20 12:22:05.477: INFO: Created: latency-svc-xv6lc
Jan 20 12:22:05.553: INFO: Got endpoints: latency-svc-xv6lc [2.598349585s]
Jan 20 12:22:05.610: INFO: Created: latency-svc-tq7sq
Jan 20 12:22:05.709: INFO: Got endpoints: latency-svc-tq7sq [2.551363087s]
Jan 20 12:22:05.720: INFO: Created: latency-svc-5b7kp
Jan 20 12:22:05.764: INFO: Got endpoints: latency-svc-5b7kp [2.493209423s]
Jan 20 12:22:05.973: INFO: Created: latency-svc-ntml7
Jan 20 12:22:05.973: INFO: Got endpoints: latency-svc-ntml7 [2.642478137s]
Jan 20 12:22:06.095: INFO: Created: latency-svc-ln58z
Jan 20 12:22:06.105: INFO: Got endpoints: latency-svc-ln58z [2.626785484s]
Jan 20 12:22:06.163: INFO: Created: latency-svc-d6ckc
Jan 20 12:22:06.248: INFO: Got endpoints: latency-svc-d6ckc [2.441466381s]
Jan 20 12:22:06.270: INFO: Created: latency-svc-ff4mg
Jan 20 12:22:06.544: INFO: Created: latency-svc-nm7cz
Jan 20 12:22:06.590: INFO: Got endpoints: latency-svc-ff4mg [2.444819426s]
Jan 20 12:22:06.780: INFO: Got endpoints: latency-svc-nm7cz [2.551130684s]
Jan 20 12:22:06.865: INFO: Created: latency-svc-mmwl9
Jan 20 12:22:07.008: INFO: Got endpoints: latency-svc-mmwl9 [2.514381349s]
Jan 20 12:22:07.034: INFO: Created: latency-svc-fl959
Jan 20 12:22:07.049: INFO: Got endpoints: latency-svc-fl959 [2.175924729s]
Jan 20 12:22:07.165: INFO: Created: latency-svc-bkqrg
Jan 20 12:22:07.180: INFO: Got endpoints: latency-svc-bkqrg [2.129752793s]
Jan 20 12:22:07.254: INFO: Created: latency-svc-wqnvf
Jan 20 12:22:07.384: INFO: Got endpoints: latency-svc-wqnvf [2.295778061s]
Jan 20 12:22:07.414: INFO: Created: latency-svc-878km
Jan 20 12:22:07.433: INFO: Got endpoints: latency-svc-878km [2.293082467s]
Jan 20 12:22:07.571: INFO: Created: latency-svc-d52fd
Jan 20 12:22:07.573: INFO: Got endpoints: latency-svc-d52fd [2.296355052s]
Jan 20 12:22:07.794: INFO: Created: latency-svc-l4pkb
Jan 20 12:22:07.829: INFO: Got endpoints: latency-svc-l4pkb [2.409937587s]
Jan 20 12:22:08.047: INFO: Created: latency-svc-bgwn8
Jan 20 12:22:08.068: INFO: Got endpoints: latency-svc-bgwn8 [2.515266976s]
Jan 20 12:22:08.202: INFO: Created: latency-svc-phz2l
Jan 20 12:22:08.317: INFO: Got endpoints: latency-svc-phz2l [2.608131202s]
Jan 20 12:22:08.350: INFO: Created: latency-svc-jbqch
Jan 20 12:22:08.350: INFO: Got endpoints: latency-svc-jbqch [2.586085683s]
Jan 20 12:22:08.411: INFO: Created: latency-svc-bh548
Jan 20 12:22:08.507: INFO: Got endpoints: latency-svc-bh548 [2.533841752s]
Jan 20 12:22:08.625: INFO: Created: latency-svc-hqr7s
Jan 20 12:22:08.721: INFO: Got endpoints: latency-svc-hqr7s [2.615893057s]
Jan 20 12:22:08.754: INFO: Created: latency-svc-z9stj
Jan 20 12:22:08.783: INFO: Got endpoints: latency-svc-z9stj [2.534418189s]
Jan 20 12:22:08.921: INFO: Created: latency-svc-gp44p
Jan 20 12:22:08.960: INFO: Got endpoints: latency-svc-gp44p [2.369309809s]
Jan 20 12:22:09.094: INFO: Created: latency-svc-sgssh
Jan 20 12:22:09.109: INFO: Got endpoints: latency-svc-sgssh [2.328922766s]
Jan 20 12:22:09.465: INFO: Created: latency-svc-7crw7
Jan 20 12:22:09.508: INFO: Got endpoints: latency-svc-7crw7 [2.499490016s]
Jan 20 12:22:09.677: INFO: Created: latency-svc-9szh6
Jan 20 12:22:09.819: INFO: Got endpoints: latency-svc-9szh6 [2.769686763s]
Jan 20 12:22:09.825: INFO: Created: latency-svc-9xmcn
Jan 20 12:22:09.860: INFO: Got endpoints: latency-svc-9xmcn [2.680029053s]
Jan 20 12:22:10.052: INFO: Created: latency-svc-cs99k
Jan 20 12:22:10.052: INFO: Got endpoints: latency-svc-cs99k [2.668137402s]
Jan 20 12:22:10.222: INFO: Created: latency-svc-z6cj2
Jan 20 12:22:10.244: INFO: Got endpoints: latency-svc-z6cj2 [2.811520804s]
Jan 20 12:22:10.818: INFO: Created: latency-svc-hjqz6
Jan 20 12:22:10.854: INFO: Got endpoints: latency-svc-hjqz6 [3.28049636s]
Jan 20 12:22:11.075: INFO: Created: latency-svc-xnbqr
Jan 20 12:22:11.095: INFO: Got endpoints: latency-svc-xnbqr [3.266022272s]
Jan 20 12:22:11.260: INFO: Created: latency-svc-jv49c
Jan 20 12:22:11.280: INFO: Got endpoints: latency-svc-jv49c [3.21136667s]
Jan 20 12:22:11.347: INFO: Created: latency-svc-h9k44
Jan 20 12:22:11.485: INFO: Got endpoints: latency-svc-h9k44 [3.167794002s]
Jan 20 12:22:11.531: INFO: Created: latency-svc-2z9gt
Jan 20 12:22:11.531: INFO: Got endpoints: latency-svc-2z9gt [3.180870929s]
Jan 20 12:22:11.719: INFO: Created: latency-svc-kczf2
Jan 20 12:22:11.720: INFO: Got endpoints: latency-svc-kczf2 [3.211989193s]
Jan 20 12:22:11.853: INFO: Created: latency-svc-bqfps
Jan 20 12:22:11.894: INFO: Created: latency-svc-wwvwt
Jan 20 12:22:11.920: INFO: Got endpoints: latency-svc-bqfps [3.198250935s]
Jan 20 12:22:11.933: INFO: Got endpoints: latency-svc-wwvwt [3.150005995s]
Jan 20 12:22:12.040: INFO: Created: latency-svc-8rqsx
Jan 20 12:22:12.070: INFO: Got endpoints: latency-svc-8rqsx [3.110133326s]
Jan 20 12:22:12.120: INFO: Created: latency-svc-kps4v
Jan 20 12:22:12.259: INFO: Got endpoints: latency-svc-kps4v [3.149281386s]
Jan 20 12:22:12.344: INFO: Created: latency-svc-hspp4
Jan 20 12:22:12.458: INFO: Got endpoints: latency-svc-hspp4 [2.948912387s]
Jan 20 12:22:12.511: INFO: Created: latency-svc-jxxh7
Jan 20 12:22:12.664: INFO: Created: latency-svc-654bk
Jan 20 12:22:12.664: INFO: Got endpoints: latency-svc-jxxh7 [2.845248731s]
Jan 20 12:22:12.678: INFO: Got endpoints: latency-svc-654bk [2.817611777s]
Jan 20 12:22:12.758: INFO: Created: latency-svc-qddn9
Jan 20 12:22:12.880: INFO: Got endpoints: latency-svc-qddn9 [2.827816443s]
Jan 20 12:22:12.911: INFO: Created: latency-svc-bhtsx
Jan 20 12:22:13.129: INFO: Got endpoints: latency-svc-bhtsx [2.883949254s]
Jan 20 12:22:13.207: INFO: Created: latency-svc-jpz4j
Jan 20 12:22:13.224: INFO: Got endpoints: latency-svc-jpz4j [2.369563388s]
Jan 20 12:22:13.531: INFO: Created: latency-svc-xpfx9
Jan 20 12:22:13.531: INFO: Got endpoints: latency-svc-xpfx9 [2.43614091s]
Jan 20 12:22:13.681: INFO: Created: latency-svc-nd5jf
Jan 20 12:22:13.709: INFO: Got endpoints: latency-svc-nd5jf [2.429278155s]
Jan 20 12:22:13.956: INFO: Created: latency-svc-q4jd2
Jan 20 12:22:14.007: INFO: Got endpoints: latency-svc-q4jd2 [2.521178784s]
Jan 20 12:22:14.205: INFO: Created: latency-svc-th9vl
Jan 20 12:22:14.232: INFO: Got endpoints: latency-svc-th9vl [2.700166284s]
Jan 20 12:22:14.375: INFO: Created: latency-svc-gwhc9
Jan 20 12:22:14.448: INFO: Got endpoints: latency-svc-gwhc9 [2.727903016s]
Jan 20 12:22:14.575: INFO: Created: latency-svc-64bhj
Jan 20 12:22:14.663: INFO: Got endpoints: latency-svc-64bhj [2.743032191s]
Jan 20 12:22:14.796: INFO: Created: latency-svc-mp6s4
Jan 20 12:22:14.803: INFO: Got endpoints: latency-svc-mp6s4 [2.869627601s]
Jan 20 12:22:14.872: INFO: Created: latency-svc-jh49x
Jan 20 12:22:14.980: INFO: Got endpoints: latency-svc-jh49x [2.90968873s]
Jan 20 12:22:14.994: INFO: Created: latency-svc-54hwz
Jan 20 12:22:15.004: INFO: Got endpoints: latency-svc-54hwz [2.7448329s]
Jan 20 12:22:15.063: INFO: Created: latency-svc-v2dqk
Jan 20 12:22:15.248: INFO: Got endpoints: latency-svc-v2dqk [2.790360614s]
Jan 20 12:22:15.269: INFO: Created: latency-svc-9gm5d
Jan 20 12:22:15.282: INFO: Got endpoints: latency-svc-9gm5d [2.617557896s]
Jan 20 12:22:15.807: INFO: Created: latency-svc-5gw7x
Jan 20 12:22:15.844: INFO: Got endpoints: latency-svc-5gw7x [3.165977765s]
Jan 20 12:22:16.066: INFO: Created: latency-svc-dkzb5
Jan 20 12:22:16.079: INFO: Got endpoints: latency-svc-dkzb5 [3.198077337s]
Jan 20 12:22:16.205: INFO: Created: latency-svc-zwjbr
Jan 20 12:22:16.211: INFO: Got endpoints: latency-svc-zwjbr [3.082159239s]
Jan 20 12:22:16.272: INFO: Created: latency-svc-rt4f7
Jan 20 12:22:16.281: INFO: Got endpoints: latency-svc-rt4f7 [3.05768823s]
Jan 20 12:22:16.389: INFO: Created: latency-svc-8c4fb
Jan 20 12:22:16.406: INFO: Got endpoints: latency-svc-8c4fb [2.874618712s]
Jan 20 12:22:16.646: INFO: Created: latency-svc-vg45d
Jan 20 12:22:16.671: INFO: Got endpoints: latency-svc-vg45d [2.96108925s]
Jan 20 12:22:16.825: INFO: Created: latency-svc-9gzkf
Jan 20 12:22:16.834: INFO: Got endpoints: latency-svc-9gzkf [2.826245649s]
Jan 20 12:22:16.892: INFO: Created: latency-svc-cmz6g
Jan 20 12:22:17.037: INFO: Got endpoints: latency-svc-cmz6g [2.804678667s]
Jan 20 12:22:17.090: INFO: Created: latency-svc-sdbhc
Jan 20 12:22:17.231: INFO: Got endpoints: latency-svc-sdbhc [2.783164463s]
Jan 20 12:22:17.259: INFO: Created: latency-svc-x2w4r
Jan 20 12:22:17.294: INFO: Got endpoints: latency-svc-x2w4r [2.630634474s]
Jan 20 12:22:17.531: INFO: Created: latency-svc-djdgx
Jan 20 12:22:17.537: INFO: Got endpoints: latency-svc-djdgx [2.733519606s]
Jan 20 12:22:17.721: INFO: Created: latency-svc-8gllr
Jan 20 12:22:17.722: INFO: Got endpoints: latency-svc-8gllr [2.741475077s]
Jan 20 12:22:17.787: INFO: Created: latency-svc-t2gdc
Jan 20 12:22:17.876: INFO: Got endpoints: latency-svc-t2gdc [2.872336416s]
Jan 20 12:22:17.926: INFO: Created: latency-svc-5fdvc
Jan 20 12:22:17.973: INFO: Got endpoints: latency-svc-5fdvc [2.724485149s]
Jan 20 12:22:18.097: INFO: Created: latency-svc-scqmx
Jan 20 12:22:18.108: INFO: Got endpoints: latency-svc-scqmx [2.826360412s]
Jan 20 12:22:18.170: INFO: Created: latency-svc-dk5hw
Jan 20 12:22:18.184: INFO: Got endpoints: latency-svc-dk5hw [2.340354201s]
Jan 20 12:22:18.287: INFO: Created: latency-svc-f2t74
Jan 20 12:22:18.304: INFO: Got endpoints: latency-svc-f2t74 [2.225361276s]
Jan 20 12:22:18.433: INFO: Created: latency-svc-8jrkj
Jan 20 12:22:18.461: INFO: Got endpoints: latency-svc-8jrkj [2.249681063s]
Jan 20 12:22:18.619: INFO: Created: latency-svc-tvh2v
Jan 20 12:22:18.635: INFO: Got endpoints: latency-svc-tvh2v [2.353423076s]
Jan 20 12:22:18.667: INFO: Created: latency-svc-prz77
Jan 20 12:22:18.679: INFO: Got endpoints: latency-svc-prz77 [2.27289624s]
Jan 20 12:22:18.827: INFO: Created: latency-svc-bqtql
Jan 20 12:22:18.873: INFO: Got endpoints: latency-svc-bqtql [2.202184548s]
Jan 20 12:22:18.879: INFO: Created: latency-svc-r6mct
Jan 20 12:22:18.968: INFO: Got endpoints: latency-svc-r6mct [2.134307347s]
Jan 20 12:22:19.001: INFO: Created: latency-svc-zc74r
Jan 20 12:22:19.009: INFO: Got endpoints: latency-svc-zc74r [1.972441978s]
Jan 20 12:22:19.122: INFO: Created: latency-svc-nrd99
Jan 20 12:22:19.157: INFO: Got endpoints: latency-svc-nrd99 [1.925437755s]
Jan 20 12:22:19.292: INFO: Created: latency-svc-9gssv
Jan 20 12:22:19.301: INFO: Got endpoints: latency-svc-9gssv [2.00723607s]
Jan 20 12:22:19.481: INFO: Created: latency-svc-7csd4
Jan 20 12:22:19.492: INFO: Got endpoints: latency-svc-7csd4 [1.954994943s]
Jan 20 12:22:19.864: INFO: Created: latency-svc-266rz
Jan 20 12:22:19.915: INFO: Got endpoints: latency-svc-266rz [2.193247773s]
Jan 20 12:22:20.224: INFO: Created: latency-svc-9tfbq
Jan 20 12:22:20.241: INFO: Got endpoints: latency-svc-9tfbq [2.364894546s]
Jan 20 12:22:20.319: INFO: Created: latency-svc-4tzr2
Jan 20 12:22:20.387: INFO: Got endpoints: latency-svc-4tzr2 [2.413343487s]
Jan 20 12:22:20.397: INFO: Created: latency-svc-wgcm9
Jan 20 12:22:20.413: INFO: Got endpoints: latency-svc-wgcm9 [2.30449586s]
Jan 20 12:22:20.490: INFO: Created: latency-svc-6khqh
Jan 20 12:22:20.503: INFO: Got endpoints: latency-svc-6khqh [2.318122422s]
Jan 20 12:22:20.648: INFO: Created: latency-svc-z2hvt
Jan 20 12:22:20.659: INFO: Got endpoints: latency-svc-z2hvt [2.354662144s]
Jan 20 12:22:20.764: INFO: Created: latency-svc-852p6
Jan 20 12:22:20.775: INFO: Got endpoints: latency-svc-852p6 [2.313989644s]
Jan 20 12:22:20.889: INFO: Created: latency-svc-wpfdh
Jan 20 12:22:20.968: INFO: Got endpoints: latency-svc-wpfdh [2.332840309s]
Jan 20 12:22:20.982: INFO: Created: latency-svc-rk2n6
Jan 20 12:22:20.997: INFO: Got endpoints: latency-svc-rk2n6 [2.317762837s]
Jan 20 12:22:21.162: INFO: Created: latency-svc-56vdp
Jan 20 12:22:21.176: INFO: Got endpoints: latency-svc-56vdp [2.302468163s]
Jan 20 12:22:21.234: INFO: Created: latency-svc-255h5
Jan 20 12:22:21.324: INFO: Got endpoints: latency-svc-255h5 [2.355531831s]
Jan 20 12:22:21.417: INFO: Created: latency-svc-k2469
Jan 20 12:22:21.707: INFO: Got endpoints: latency-svc-k2469 [2.697610938s]
Jan 20 12:22:21.730: INFO: Created: latency-svc-x5lp9
Jan 20 12:22:21.775: INFO: Got endpoints: latency-svc-x5lp9 [2.618646842s]
Jan 20 12:22:22.033: INFO: Created: latency-svc-9shp6
Jan 20 12:22:22.039: INFO: Got endpoints: latency-svc-9shp6 [2.737470914s]
Jan 20 12:22:22.225: INFO: Created: latency-svc-4zz9h
Jan 20 12:22:22.234: INFO: Got endpoints: latency-svc-4zz9h [2.742141619s]
Jan 20 12:22:22.403: INFO: Created: latency-svc-n24vl
Jan 20 12:22:22.436: INFO: Got endpoints: latency-svc-n24vl [2.521047923s]
Jan 20 12:22:22.684: INFO: Created: latency-svc-lvr8h
Jan 20 12:22:22.833: INFO: Got endpoints: latency-svc-lvr8h [2.59130189s]
Jan 20 12:22:22.933: INFO: Created: latency-svc-rfdkb
Jan 20 12:22:23.119: INFO: Got endpoints: latency-svc-rfdkb [2.731713025s]
Jan 20 12:22:23.299: INFO: Created: latency-svc-mdnmm
Jan 20 12:22:23.342: INFO: Got endpoints: latency-svc-mdnmm [2.928798757s]
Jan 20 12:22:23.572: INFO: Created: latency-svc-6jps2
Jan 20 12:22:23.589: INFO: Got endpoints: latency-svc-6jps2 [3.086565088s]
Jan 20 12:22:23.784: INFO: Created: latency-svc-bb92c
Jan 20 12:22:23.968: INFO: Created: latency-svc-72smg
Jan 20 12:22:23.970: INFO: Got endpoints: latency-svc-bb92c [3.310854276s]
Jan 20 12:22:24.015: INFO: Got endpoints: latency-svc-72smg [3.239619946s]
Jan 20 12:22:24.188: INFO: Created: latency-svc-t759q
Jan 20 12:22:24.216: INFO: Got endpoints: latency-svc-t759q [3.24786745s]
Jan 20 12:22:24.216: INFO: Latencies: [161.134653ms 251.580349ms 279.442148ms 521.712188ms 590.267429ms 734.32973ms 995.596537ms 1.406484671s 1.43051422s 1.657406309s 1.77340951s 1.925437755s 1.954994943s 1.972441978s 1.978077641s 2.00723607s 2.019365964s 2.129752793s 2.134307347s 2.175924729s 2.193247773s 2.201085288s 2.202184548s 2.225361276s 2.231525122s 2.249681063s 2.27289624s 2.293082467s 2.295778061s 2.296355052s 2.302468163s 2.30449586s 2.313989644s 2.317762837s 2.318122422s 2.328922766s 2.332840309s 2.334356031s 2.340354201s 2.353423076s 2.354662144s 2.355531831s 2.364894546s 2.369309809s 2.369563388s 2.380360428s 2.398995736s 2.409937587s 2.413214059s 2.413343487s 2.429278155s 2.433993058s 2.43614091s 2.441466381s 2.44444783s 2.444819426s 2.450752174s 2.457821585s 2.460623448s 2.469116697s 2.471768791s 2.476051975s 2.48512638s 2.493209423s 2.499490016s 2.514381349s 2.515266976s 2.521047923s 2.521061364s 2.521178784s 2.524307008s 2.524851894s 2.525388243s 2.533841752s 2.534418189s 2.543278154s 2.550309938s 2.551130684s 2.551363087s 2.56835142s 2.583398627s 2.586085683s 2.59130189s 2.591837057s 2.598349585s 2.608131202s 2.615893057s 2.617557896s 2.617984281s 2.618646842s 2.626785484s 2.630634474s 2.632091265s 2.6409967s 2.642478137s 2.656210765s 2.668137402s 2.676810751s 2.680029053s 2.68003955s 2.687963252s 2.691727708s 2.697610938s 2.699661565s 2.700166284s 2.700835147s 2.724485149s 2.727903016s 2.731713025s 2.732986076s 2.733519606s 2.737470914s 2.741214024s 2.741475077s 2.742141619s 2.743032191s 2.7448329s 2.750115098s 2.756323466s 2.759114344s 2.761169896s 2.767335976s 2.768248773s 2.769686763s 2.771862019s 2.776673699s 2.778511524s 2.782890648s 2.783164463s 2.786374162s 2.790360614s 2.790731907s 2.804678667s 2.809610049s 2.810722127s 2.81135141s 2.811520804s 2.817611777s 2.826245649s 2.826360412s 2.827816443s 2.832964646s 2.83427531s 2.845248731s 2.851642351s 2.869627601s 2.871045603s 2.872336416s 2.874618712s 2.883949254s 2.889308467s 2.891173048s 2.89817541s 2.904711967s 2.904963952s 2.90968873s 2.923229967s 2.928798757s 2.934357947s 2.938038953s 2.941165891s 2.9424951s 2.948912387s 2.96108925s 2.964084621s 2.967252202s 2.969066235s 2.987386476s 2.998150011s 3.006647679s 3.021930095s 3.028379551s 3.049264142s 3.053579147s 3.05768823s 3.058550487s 3.070954006s 3.077856549s 3.082159239s 3.086565088s 3.108495201s 3.110133326s 3.112025309s 3.125412431s 3.141327806s 3.149281386s 3.150005995s 3.165977765s 3.167794002s 3.180870929s 3.198077337s 3.198250935s 3.206819558s 3.21136667s 3.211989193s 3.239619946s 3.24786745s 3.266022272s 3.28049636s 3.310854276s]
Jan 20 12:22:24.216: INFO: 50 %ile: 2.687963252s
Jan 20 12:22:24.216: INFO: 90 %ile: 3.108495201s
Jan 20 12:22:24.216: INFO: 99 %ile: 3.28049636s
Jan 20 12:22:24.216: INFO: Total sample count: 200
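The 50/90/99 %ile lines above are derived from the sorted list of 200 endpoint latencies. A minimal sketch of that computation, using a nearest-rank percentile (an assumption about the method, not the exact e2e framework code; `percentile` and `samples` are illustrative names):

```python
def percentile(sorted_latencies, q):
    """Nearest-rank q-th percentile over an already-sorted list of latencies."""
    if not sorted_latencies:
        raise ValueError("no samples")
    # Rank q% of the way through the list, clamped to a valid index.
    idx = max(0, int(len(sorted_latencies) * q / 100) - 1)
    return sorted_latencies[idx]

# Illustrative subset of the samples above, in seconds.
samples = sorted([0.161134653, 2.687963252, 2.89817541, 3.108495201, 3.310854276])
print("50 %ile:", percentile(samples, 50))
print("99 %ile:", percentile(samples, 99))
```

With the full 200-sample list, the 90 %ile is simply the 180th smallest latency, which matches the suite reporting a value from within the sorted array rather than an interpolated one.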
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:22:24.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svc-latency-l9hh2" for this suite.
Jan 20 12:23:18.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:23:18.320: INFO: namespace: e2e-tests-svc-latency-l9hh2, resource: bindings, ignored listing per whitelist
Jan 20 12:23:18.453: INFO: namespace e2e-tests-svc-latency-l9hh2 deletion completed in 54.21836885s

• [SLOW TEST:99.997 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:23:18.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-vjh6
STEP: Creating a pod to test atomic-volume-subpath
Jan 20 12:23:18.794: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vjh6" in namespace "e2e-tests-subpath-749mx" to be "success or failure"
Jan 20 12:23:18.806: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.913572ms
Jan 20 12:23:20.833: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039747605s
Jan 20 12:23:22.868: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073831548s
Jan 20 12:23:24.885: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09111367s
Jan 20 12:23:26.905: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111677493s
Jan 20 12:23:29.002: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.207870501s
Jan 20 12:23:31.034: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.240242566s
Jan 20 12:23:33.063: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.268830448s
Jan 20 12:23:35.075: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.281517945s
Jan 20 12:23:37.111: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Running", Reason="", readiness=false. Elapsed: 18.317340183s
Jan 20 12:23:39.127: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Running", Reason="", readiness=false. Elapsed: 20.333593103s
Jan 20 12:23:41.143: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Running", Reason="", readiness=false. Elapsed: 22.349419594s
Jan 20 12:23:43.154: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Running", Reason="", readiness=false. Elapsed: 24.360246559s
Jan 20 12:23:45.173: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Running", Reason="", readiness=false. Elapsed: 26.37967699s
Jan 20 12:23:47.189: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Running", Reason="", readiness=false. Elapsed: 28.395711606s
Jan 20 12:23:49.207: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Running", Reason="", readiness=false. Elapsed: 30.41334307s
Jan 20 12:23:51.716: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Running", Reason="", readiness=false. Elapsed: 32.922191155s
Jan 20 12:23:53.730: INFO: Pod "pod-subpath-test-downwardapi-vjh6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.936502223s
STEP: Saw pod success
Jan 20 12:23:53.730: INFO: Pod "pod-subpath-test-downwardapi-vjh6" satisfied condition "success or failure"
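The repeated `Phase="Pending"` / `Phase="Running"` lines above come from the framework polling the pod every couple of seconds, up to the 5m0s timeout, until it reaches a terminal phase. A rough sketch of such a wait loop (the function and parameter names here are illustrative, not the actual e2e framework API):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until the pod reports "Succeeded" or "Failed",
    mirroring the "success or failure" condition logged above."""
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)  # wait between polls, like the ~2s gaps in the log
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated phase sequence standing in for repeated API reads.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None))
```

Note the loop returns on `Failed` as well: the test then decides separately whether that phase satisfies its condition.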
Jan 20 12:23:53.747: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-downwardapi-vjh6 container test-container-subpath-downwardapi-vjh6: 
STEP: delete the pod
Jan 20 12:23:54.259: INFO: Waiting for pod pod-subpath-test-downwardapi-vjh6 to disappear
Jan 20 12:23:54.273: INFO: Pod pod-subpath-test-downwardapi-vjh6 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-vjh6
Jan 20 12:23:54.273: INFO: Deleting pod "pod-subpath-test-downwardapi-vjh6" in namespace "e2e-tests-subpath-749mx"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:23:54.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-749mx" for this suite.
Jan 20 12:24:00.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:24:00.809: INFO: namespace: e2e-tests-subpath-749mx, resource: bindings, ignored listing per whitelist
Jan 20 12:24:00.932: INFO: namespace e2e-tests-subpath-749mx deletion completed in 6.646733407s

• [SLOW TEST:42.479 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:24:00.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-ttn62/secret-test-bd50ec55-3b7f-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 20 12:24:01.173: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd525db2-3b7f-11ea-8bde-0242ac110005" in namespace "e2e-tests-secrets-ttn62" to be "success or failure"
Jan 20 12:24:01.210: INFO: Pod "pod-configmaps-bd525db2-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 36.569695ms
Jan 20 12:24:03.225: INFO: Pod "pod-configmaps-bd525db2-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051622274s
Jan 20 12:24:05.246: INFO: Pod "pod-configmaps-bd525db2-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072479577s
Jan 20 12:24:07.262: INFO: Pod "pod-configmaps-bd525db2-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088372841s
Jan 20 12:24:09.272: INFO: Pod "pod-configmaps-bd525db2-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098622883s
Jan 20 12:24:11.613: INFO: Pod "pod-configmaps-bd525db2-3b7f-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.439242447s
STEP: Saw pod success
Jan 20 12:24:11.613: INFO: Pod "pod-configmaps-bd525db2-3b7f-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:24:11.622: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-bd525db2-3b7f-11ea-8bde-0242ac110005 container env-test: 
STEP: delete the pod
Jan 20 12:24:12.005: INFO: Waiting for pod pod-configmaps-bd525db2-3b7f-11ea-8bde-0242ac110005 to disappear
Jan 20 12:24:12.018: INFO: Pod pod-configmaps-bd525db2-3b7f-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:24:12.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-ttn62" for this suite.
Jan 20 12:24:18.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:24:18.230: INFO: namespace: e2e-tests-secrets-ttn62, resource: bindings, ignored listing per whitelist
Jan 20 12:24:18.274: INFO: namespace e2e-tests-secrets-ttn62 deletion completed in 6.244501422s

• [SLOW TEST:17.342 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
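The spec above consumes a Secret through container environment variables. A minimal sketch of the kind of objects such a test creates (names, keys, and the image are illustrative placeholders, not the generated identifiers from the log):

```yaml
# Illustrative only: a Secret plus a Pod that maps one of its keys
# into an environment variable via secretKeyRef.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]   # dump the environment so the test can grep for the value
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
```

The "success or failure" polling in the log corresponds to waiting for this short-lived pod to reach `Succeeded`, then reading its logs to verify the variable appeared.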
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:24:18.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 12:24:18.607: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7ace046-3b7f-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-b4n5h" to be "success or failure"
Jan 20 12:24:18.623: INFO: Pod "downwardapi-volume-c7ace046-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.563382ms
Jan 20 12:24:20.664: INFO: Pod "downwardapi-volume-c7ace046-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056485289s
Jan 20 12:24:22.728: INFO: Pod "downwardapi-volume-c7ace046-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120584417s
Jan 20 12:24:25.160: INFO: Pod "downwardapi-volume-c7ace046-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.552878122s
Jan 20 12:24:27.215: INFO: Pod "downwardapi-volume-c7ace046-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.607575378s
Jan 20 12:24:29.231: INFO: Pod "downwardapi-volume-c7ace046-3b7f-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.623948692s
STEP: Saw pod success
Jan 20 12:24:29.232: INFO: Pod "downwardapi-volume-c7ace046-3b7f-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:24:29.237: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-c7ace046-3b7f-11ea-8bde-0242ac110005 container client-container: 
STEP: delete the pod
Jan 20 12:24:29.992: INFO: Waiting for pod downwardapi-volume-c7ace046-3b7f-11ea-8bde-0242ac110005 to disappear
Jan 20 12:24:30.008: INFO: Pod downwardapi-volume-c7ace046-3b7f-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:24:30.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-b4n5h" for this suite.
Jan 20 12:24:36.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:24:36.194: INFO: namespace: e2e-tests-projected-b4n5h, resource: bindings, ignored listing per whitelist
Jan 20 12:24:36.224: INFO: namespace e2e-tests-projected-b4n5h deletion completed in 6.200209159s

• [SLOW TEST:17.948 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
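The projected downward API spec above exposes a container's CPU limit as a file in a projected volume. A sketch of the shape of the pod under test (names and the limit value are illustrative):

```yaml
# Illustrative only: a projected downwardAPI volume publishing
# limits.cpu of the container into /etc/podinfo/cpu_limit.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"              # the value the volume file reflects
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```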
SSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:24:36.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:24:36.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dnmcz" for this suite.
Jan 20 12:25:00.664: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:25:00.798: INFO: namespace: e2e-tests-pods-dnmcz, resource: bindings, ignored listing per whitelist
Jan 20 12:25:00.811: INFO: namespace e2e-tests-pods-dnmcz deletion completed in 24.289351014s

• [SLOW TEST:24.587 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
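The "Pods Set QOS Class" spec verifies that the API server assigns a QoS class when a pod is submitted. A minimal sketch (names and resource values are illustrative): a pod whose every container has `requests` equal to `limits` for both CPU and memory is classified as Guaranteed.

```yaml
# Illustrative only: requests == limits for all containers
# yields status.qosClass "Guaranteed".
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: "100m"
        memory: "100Mi"
      limits:
        cpu: "100m"
        memory: "100Mi"
```

The assigned class can be read back with `kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'`.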
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:25:00.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 20 12:25:01.051: INFO: Number of nodes with available pods: 0
Jan 20 12:25:01.051: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:25:02.232: INFO: Number of nodes with available pods: 0
Jan 20 12:25:02.232: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:25:03.178: INFO: Number of nodes with available pods: 0
Jan 20 12:25:03.178: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:25:04.095: INFO: Number of nodes with available pods: 0
Jan 20 12:25:04.095: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:25:05.081: INFO: Number of nodes with available pods: 0
Jan 20 12:25:05.081: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:25:06.111: INFO: Number of nodes with available pods: 0
Jan 20 12:25:06.111: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:25:07.892: INFO: Number of nodes with available pods: 0
Jan 20 12:25:07.892: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:25:08.098: INFO: Number of nodes with available pods: 0
Jan 20 12:25:08.098: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:25:09.081: INFO: Number of nodes with available pods: 0
Jan 20 12:25:09.081: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:25:10.077: INFO: Number of nodes with available pods: 0
Jan 20 12:25:10.077: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:25:11.074: INFO: Number of nodes with available pods: 1
Jan 20 12:25:11.074: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 20 12:25:11.185: INFO: Number of nodes with available pods: 1
Jan 20 12:25:11.185: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-mknjn, will wait for the garbage collector to delete the pods
Jan 20 12:25:14.004: INFO: Deleting DaemonSet.extensions daemon-set took: 52.767048ms
Jan 20 12:25:14.205: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.510077ms
Jan 20 12:25:18.366: INFO: Number of nodes with available pods: 0
Jan 20 12:25:18.366: INFO: Number of running nodes: 0, number of available pods: 0
Jan 20 12:25:18.370: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-mknjn/daemonsets","resourceVersion":"18853069"},"items":null}

Jan 20 12:25:18.372: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-mknjn/pods","resourceVersion":"18853069"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:25:18.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-mknjn" for this suite.
Jan 20 12:25:26.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:25:26.547: INFO: namespace: e2e-tests-daemonsets-mknjn, resource: bindings, ignored listing per whitelist
Jan 20 12:25:26.614: INFO: namespace e2e-tests-daemonsets-mknjn deletion completed in 8.231421452s

• [SLOW TEST:25.802 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
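The DaemonSet spec above creates a simple DaemonSet, forces one of its pods to `Failed`, and checks that the controller recreates it. A sketch of a comparably simple DaemonSet (labels and image are illustrative, not the test's generated ones):

```yaml
# Illustrative only: a one-container DaemonSet; the controller
# schedules one pod per eligible node and replaces failed pods.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
```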
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:25:26.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 20 12:25:26.894: INFO: Waiting up to 5m0s for pod "pod-f06ddccb-3b7f-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-xwbs8" to be "success or failure"
Jan 20 12:25:26.926: INFO: Pod "pod-f06ddccb-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 31.732335ms
Jan 20 12:25:29.039: INFO: Pod "pod-f06ddccb-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144516077s
Jan 20 12:25:31.062: INFO: Pod "pod-f06ddccb-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167357579s
Jan 20 12:25:33.384: INFO: Pod "pod-f06ddccb-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.489360617s
Jan 20 12:25:35.397: INFO: Pod "pod-f06ddccb-3b7f-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.503190988s
Jan 20 12:25:37.480: INFO: Pod "pod-f06ddccb-3b7f-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.585474921s
STEP: Saw pod success
Jan 20 12:25:37.480: INFO: Pod "pod-f06ddccb-3b7f-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:25:37.489: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-f06ddccb-3b7f-11ea-8bde-0242ac110005 container test-container: 
STEP: delete the pod
Jan 20 12:25:37.820: INFO: Waiting for pod pod-f06ddccb-3b7f-11ea-8bde-0242ac110005 to disappear
Jan 20 12:25:37.835: INFO: Pod pod-f06ddccb-3b7f-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:25:37.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-xwbs8" for this suite.
Jan 20 12:25:44.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:25:44.261: INFO: namespace: e2e-tests-emptydir-xwbs8, resource: bindings, ignored listing per whitelist
Jan 20 12:25:44.267: INFO: namespace e2e-tests-emptydir-xwbs8 deletion completed in 6.273297429s

• [SLOW TEST:17.652 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
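The EmptyDir "(non-root,0777,tmpfs)" spec runs as a non-root user against a memory-backed emptyDir and checks file permissions. A hedged sketch of the setup (user ID, paths, and the command are illustrative; the real suite uses its own mount-test image):

```yaml
# Illustrative only: non-root pod writing to a tmpfs-backed
# emptyDir and inspecting the 0777 mode.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-pod
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs-backed, the "tmpfs" part of the test name
```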
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:25:44.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jan 20 12:25:52.545: INFO: Pod pod-hostip-fae95cb7-3b7f-11ea-8bde-0242ac110005 has hostIP: 10.96.1.240
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:25:52.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-fdsmm" for this suite.
Jan 20 12:26:16.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:26:16.808: INFO: namespace: e2e-tests-pods-fdsmm, resource: bindings, ignored listing per whitelist
Jan 20 12:26:16.900: INFO: namespace e2e-tests-pods-fdsmm deletion completed in 24.276974135s

• [SLOW TEST:32.633 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
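The "should get a host IP" spec asserts that `status.hostIP` is populated once the pod is scheduled (the log shows `hostIP: 10.96.1.240`). The same field can also be surfaced inside the container via the downward API; a sketch (names are illustrative):

```yaml
# Illustrative only: exposing the scheduled node's IP to the
# container through a fieldRef on status.hostIP.
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostip
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo $HOST_IP && sleep 3600"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```

The test itself reads the value from the API object, e.g. `kubectl get pod pod-hostip -o jsonpath='{.status.hostIP}'`.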
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:26:16.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:26:17.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-2cxpd" for this suite.
Jan 20 12:26:23.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:26:23.355: INFO: namespace: e2e-tests-services-2cxpd, resource: bindings, ignored listing per whitelist
Jan 20 12:26:23.367: INFO: namespace e2e-tests-services-2cxpd deletion completed in 6.238798903s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.467 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:26:23.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-nctq8/configmap-test-12338189-3b80-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 20 12:26:23.563: INFO: Waiting up to 5m0s for pod "pod-configmaps-12347a3f-3b80-11ea-8bde-0242ac110005" in namespace "e2e-tests-configmap-nctq8" to be "success or failure"
Jan 20 12:26:23.569: INFO: Pod "pod-configmaps-12347a3f-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 5.817897ms
Jan 20 12:26:25.602: INFO: Pod "pod-configmaps-12347a3f-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039007565s
Jan 20 12:26:27.615: INFO: Pod "pod-configmaps-12347a3f-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052111198s
Jan 20 12:26:29.888: INFO: Pod "pod-configmaps-12347a3f-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.325531289s
Jan 20 12:26:32.206: INFO: Pod "pod-configmaps-12347a3f-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.643048434s
Jan 20 12:26:34.258: INFO: Pod "pod-configmaps-12347a3f-3b80-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.69578482s
STEP: Saw pod success
Jan 20 12:26:34.259: INFO: Pod "pod-configmaps-12347a3f-3b80-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:26:34.277: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-configmaps-12347a3f-3b80-11ea-8bde-0242ac110005 container env-test: 
STEP: delete the pod
Jan 20 12:26:34.423: INFO: Waiting for pod pod-configmaps-12347a3f-3b80-11ea-8bde-0242ac110005 to disappear
Jan 20 12:26:34.435: INFO: Pod pod-configmaps-12347a3f-3b80-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:26:34.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-nctq8" for this suite.
Jan 20 12:26:40.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:26:40.686: INFO: namespace: e2e-tests-configmap-nctq8, resource: bindings, ignored listing per whitelist
Jan 20 12:26:40.783: INFO: namespace e2e-tests-configmap-nctq8 deletion completed in 6.338664795s

• [SLOW TEST:17.415 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
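The ConfigMap spec mirrors the earlier Secret test but sources the environment variable from a ConfigMap. A minimal sketch (names, keys, and image are illustrative):

```yaml
# Illustrative only: a ConfigMap key mapped into an environment
# variable via configMapKeyRef.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```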
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:26:40.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-1c996974-3b80-11ea-8bde-0242ac110005
Jan 20 12:26:41.028: INFO: Pod name my-hostname-basic-1c996974-3b80-11ea-8bde-0242ac110005: Found 0 pods out of 1
Jan 20 12:26:46.169: INFO: Pod name my-hostname-basic-1c996974-3b80-11ea-8bde-0242ac110005: Found 1 pods out of 1
Jan 20 12:26:46.169: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1c996974-3b80-11ea-8bde-0242ac110005" are running
Jan 20 12:26:50.203: INFO: Pod "my-hostname-basic-1c996974-3b80-11ea-8bde-0242ac110005-sf9pn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 12:26:41 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 12:26:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1c996974-3b80-11ea-8bde-0242ac110005]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 12:26:41 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1c996974-3b80-11ea-8bde-0242ac110005]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-20 12:26:41 +0000 UTC Reason: Message:}])
Jan 20 12:26:50.203: INFO: Trying to dial the pod
Jan 20 12:26:55.248: INFO: Controller my-hostname-basic-1c996974-3b80-11ea-8bde-0242ac110005: Got expected result from replica 1 [my-hostname-basic-1c996974-3b80-11ea-8bde-0242ac110005-sf9pn]: "my-hostname-basic-1c996974-3b80-11ea-8bde-0242ac110005-sf9pn", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:26:55.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-624v2" for this suite.
Jan 20 12:27:01.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:27:01.360: INFO: namespace: e2e-tests-replication-controller-624v2, resource: bindings, ignored listing per whitelist
Jan 20 12:27:01.488: INFO: namespace e2e-tests-replication-controller-624v2 deletion completed in 6.230670162s

• [SLOW TEST:20.705 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
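The ReplicationController spec creates a controller whose replica serves its own pod name over HTTP, then dials each replica and compares the response to the pod name (the log shows the expected result from replica 1). A sketch of such a controller; the image is an assumption standing in for the suite's own serve-hostname image:

```yaml
# Illustrative only: one replica serving its hostname; the image
# shown is a placeholder, not the one the logged suite used.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: registry.k8s.io/e2e-test-images/agnhost:2.39  # assumed stand-in
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
```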
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:27:01.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-28e5c6fe-3b80-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume secrets
Jan 20 12:27:01.702: INFO: Waiting up to 5m0s for pod "pod-secrets-28efbd4f-3b80-11ea-8bde-0242ac110005" in namespace "e2e-tests-secrets-8kfkk" to be "success or failure"
Jan 20 12:27:01.738: INFO: Pod "pod-secrets-28efbd4f-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 35.66747ms
Jan 20 12:27:03.755: INFO: Pod "pod-secrets-28efbd4f-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053082833s
Jan 20 12:27:05.769: INFO: Pod "pod-secrets-28efbd4f-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066869186s
Jan 20 12:27:07.865: INFO: Pod "pod-secrets-28efbd4f-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162749167s
Jan 20 12:27:09.890: INFO: Pod "pod-secrets-28efbd4f-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.187794451s
Jan 20 12:27:11.914: INFO: Pod "pod-secrets-28efbd4f-3b80-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.212294368s
STEP: Saw pod success
Jan 20 12:27:11.915: INFO: Pod "pod-secrets-28efbd4f-3b80-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:27:11.926: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-secrets-28efbd4f-3b80-11ea-8bde-0242ac110005 container secret-env-test: 
STEP: delete the pod
Jan 20 12:27:12.114: INFO: Waiting for pod pod-secrets-28efbd4f-3b80-11ea-8bde-0242ac110005 to disappear
Jan 20 12:27:12.135: INFO: Pod pod-secrets-28efbd4f-3b80-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:27:12.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8kfkk" for this suite.
Jan 20 12:27:18.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:27:18.337: INFO: namespace: e2e-tests-secrets-8kfkk, resource: bindings, ignored listing per whitelist
Jan 20 12:27:18.377: INFO: namespace e2e-tests-secrets-8kfkk deletion completed in 6.222235593s

• [SLOW TEST:16.888 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:27:18.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 20 12:27:18.811: INFO: Waiting up to 5m0s for pod "pod-331d5da3-3b80-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-r7qp2" to be "success or failure"
Jan 20 12:27:18.825: INFO: Pod "pod-331d5da3-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.853318ms
Jan 20 12:27:20.852: INFO: Pod "pod-331d5da3-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041106119s
Jan 20 12:27:22.868: INFO: Pod "pod-331d5da3-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056569546s
Jan 20 12:27:24.885: INFO: Pod "pod-331d5da3-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073977651s
Jan 20 12:27:26.901: INFO: Pod "pod-331d5da3-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090353677s
Jan 20 12:27:28.960: INFO: Pod "pod-331d5da3-3b80-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.149301745s
STEP: Saw pod success
Jan 20 12:27:28.960: INFO: Pod "pod-331d5da3-3b80-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:27:28.975: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-331d5da3-3b80-11ea-8bde-0242ac110005 container test-container: 
STEP: delete the pod
Jan 20 12:27:29.191: INFO: Waiting for pod pod-331d5da3-3b80-11ea-8bde-0242ac110005 to disappear
Jan 20 12:27:29.240: INFO: Pod pod-331d5da3-3b80-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:27:29.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-r7qp2" for this suite.
Jan 20 12:27:35.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:27:35.563: INFO: namespace: e2e-tests-emptydir-r7qp2, resource: bindings, ignored listing per whitelist
Jan 20 12:27:35.610: INFO: namespace e2e-tests-emptydir-r7qp2 deletion completed in 6.339344053s

• [SLOW TEST:17.233 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:27:35.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-fc6pj
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 20 12:27:35.845: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 20 12:28:14.208: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-fc6pj PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 12:28:14.209: INFO: >>> kubeConfig: /root/.kube/config
I0120 12:28:14.271209       8 log.go:172] (0xc0020502c0) (0xc000be5680) Create stream
I0120 12:28:14.271271       8 log.go:172] (0xc0020502c0) (0xc000be5680) Stream added, broadcasting: 1
I0120 12:28:14.276372       8 log.go:172] (0xc0020502c0) Reply frame received for 1
I0120 12:28:14.276408       8 log.go:172] (0xc0020502c0) (0xc00113f0e0) Create stream
I0120 12:28:14.276415       8 log.go:172] (0xc0020502c0) (0xc00113f0e0) Stream added, broadcasting: 3
I0120 12:28:14.277242       8 log.go:172] (0xc0020502c0) Reply frame received for 3
I0120 12:28:14.277258       8 log.go:172] (0xc0020502c0) (0xc000be5720) Create stream
I0120 12:28:14.277266       8 log.go:172] (0xc0020502c0) (0xc000be5720) Stream added, broadcasting: 5
I0120 12:28:14.278152       8 log.go:172] (0xc0020502c0) Reply frame received for 5
I0120 12:28:15.417519       8 log.go:172] (0xc0020502c0) Data frame received for 3
I0120 12:28:15.417613       8 log.go:172] (0xc00113f0e0) (3) Data frame handling
I0120 12:28:15.417696       8 log.go:172] (0xc00113f0e0) (3) Data frame sent
I0120 12:28:15.650648       8 log.go:172] (0xc0020502c0) (0xc00113f0e0) Stream removed, broadcasting: 3
I0120 12:28:15.650816       8 log.go:172] (0xc0020502c0) Data frame received for 1
I0120 12:28:15.650874       8 log.go:172] (0xc000be5680) (1) Data frame handling
I0120 12:28:15.650931       8 log.go:172] (0xc000be5680) (1) Data frame sent
I0120 12:28:15.651003       8 log.go:172] (0xc0020502c0) (0xc000be5720) Stream removed, broadcasting: 5
I0120 12:28:15.651081       8 log.go:172] (0xc0020502c0) (0xc000be5680) Stream removed, broadcasting: 1
I0120 12:28:15.651142       8 log.go:172] (0xc0020502c0) Go away received
I0120 12:28:15.652470       8 log.go:172] (0xc0020502c0) (0xc000be5680) Stream removed, broadcasting: 1
I0120 12:28:15.652622       8 log.go:172] (0xc0020502c0) (0xc00113f0e0) Stream removed, broadcasting: 3
I0120 12:28:15.652647       8 log.go:172] (0xc0020502c0) (0xc000be5720) Stream removed, broadcasting: 5
Jan 20 12:28:15.652: INFO: Found all expected endpoints: [netserver-0]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:28:15.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-fc6pj" for this suite.
Jan 20 12:28:41.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:28:42.015: INFO: namespace: e2e-tests-pod-network-test-fc6pj, resource: bindings, ignored listing per whitelist
Jan 20 12:28:42.104: INFO: namespace e2e-tests-pod-network-test-fc6pj deletion completed in 26.398133925s

• [SLOW TEST:66.493 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:28:42.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:28:52.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-nd9g9" for this suite.
Jan 20 12:29:34.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:29:34.579: INFO: namespace: e2e-tests-kubelet-test-nd9g9, resource: bindings, ignored listing per whitelist
Jan 20 12:29:34.627: INFO: namespace e2e-tests-kubelet-test-nd9g9 deletion completed in 42.255697291s

• [SLOW TEST:52.523 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:29:34.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:30:31.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-kvw85" for this suite.
Jan 20 12:30:37.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:30:37.639: INFO: namespace: e2e-tests-container-runtime-kvw85, resource: bindings, ignored listing per whitelist
Jan 20 12:30:37.719: INFO: namespace e2e-tests-container-runtime-kvw85 deletion completed in 6.431412662s

• [SLOW TEST:63.092 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:30:37.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jan 20 12:30:37.918: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:30:38.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zqrlr" for this suite.
Jan 20 12:30:44.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:30:44.165: INFO: namespace: e2e-tests-kubectl-zqrlr, resource: bindings, ignored listing per whitelist
Jan 20 12:30:44.180: INFO: namespace e2e-tests-kubectl-zqrlr deletion completed in 6.138118612s

• [SLOW TEST:6.461 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:30:44.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0120 12:30:46.562362       8 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 20 12:30:46.562: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:30:46.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-z98sb" for this suite.
Jan 20 12:30:54.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:30:54.930: INFO: namespace: e2e-tests-gc-z98sb, resource: bindings, ignored listing per whitelist
Jan 20 12:30:54.968: INFO: namespace e2e-tests-gc-z98sb deletion completed in 8.394964566s

• [SLOW TEST:10.788 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:30:54.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 12:30:55.156: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b40ff36a-3b80-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-bx496" to be "success or failure"
Jan 20 12:30:55.169: INFO: Pod "downwardapi-volume-b40ff36a-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 12.531121ms
Jan 20 12:30:57.355: INFO: Pod "downwardapi-volume-b40ff36a-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198944607s
Jan 20 12:30:59.388: INFO: Pod "downwardapi-volume-b40ff36a-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231675804s
Jan 20 12:31:01.459: INFO: Pod "downwardapi-volume-b40ff36a-3b80-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.302479499s
Jan 20 12:31:04.010: INFO: Pod "downwardapi-volume-b40ff36a-3b80-11ea-8bde-0242ac110005": Phase="Running", Reason="", readiness=true. Elapsed: 8.853368578s
Jan 20 12:31:06.040: INFO: Pod "downwardapi-volume-b40ff36a-3b80-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.883424368s
STEP: Saw pod success
Jan 20 12:31:06.040: INFO: Pod "downwardapi-volume-b40ff36a-3b80-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:31:06.070: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-b40ff36a-3b80-11ea-8bde-0242ac110005 container client-container: 
STEP: delete the pod
Jan 20 12:31:06.591: INFO: Waiting for pod downwardapi-volume-b40ff36a-3b80-11ea-8bde-0242ac110005 to disappear
Jan 20 12:31:06.666: INFO: Pod downwardapi-volume-b40ff36a-3b80-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:31:06.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bx496" for this suite.
Jan 20 12:31:12.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:31:12.769: INFO: namespace: e2e-tests-projected-bx496, resource: bindings, ignored listing per whitelist
Jan 20 12:31:12.914: INFO: namespace e2e-tests-projected-bx496 deletion completed in 6.231969205s

• [SLOW TEST:17.946 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:31:12.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jan 20 12:31:13.080: INFO: PodSpec: initContainers in spec.initContainers
Jan 20 12:32:20.809: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-bec757ff-3b80-11ea-8bde-0242ac110005", GenerateName:"", Namespace:"e2e-tests-init-container-gkshp", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-gkshp/pods/pod-init-bec757ff-3b80-11ea-8bde-0242ac110005", UID:"bece0e16-3b80-11ea-a994-fa163e34d433", ResourceVersion:"18853994", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715120273, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"80947824"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-26dlt", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002174680), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-26dlt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-26dlt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-26dlt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0027d0498), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001f92b40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), 
SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0027d0520)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0027d0540)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0027d0548), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0027d054c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715120273, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715120273, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715120273, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715120273, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", 
StartTime:(*v1.Time)(0xc0015c2bc0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002797880)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://815aa19e1ce9b634dd1d8eaa4d51faf3ba63a3f92e91e6d720fef9522332ff30"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0015c2c80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0015c2be0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:32:20.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-gkshp" for this suite.
Jan 20 12:32:45.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:32:45.214: INFO: namespace: e2e-tests-init-container-gkshp, resource: bindings, ignored listing per whitelist
Jan 20 12:32:45.224: INFO: namespace e2e-tests-init-container-gkshp deletion completed in 24.216104414s

• [SLOW TEST:92.310 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
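The pod-status dump above (init1 repeatedly terminating with RestartCount climbing, init2 stuck Waiting, app container run1 Waiting) encodes the invariant this spec verifies: app containers must not start until every init container has completed successfully. A minimal sketch of that invariant, with illustrative data shapes (not the real `v1.ContainerStatus` types):

```python
def app_may_start(init_statuses):
    """App containers may start only after every init container has
    terminated with exit code 0. `terminated_exit_code` is None while
    the init container is still Waiting or Running."""
    return all(s.get("terminated_exit_code") == 0 for s in init_statuses)

# Mirrors the dump above: init1 keeps failing (nonzero exit, restarts),
# init2 never leaves Waiting, so run1 must not be started.
init = [{"name": "init1", "terminated_exit_code": 1},
        {"name": "init2", "terminated_exit_code": None}]
assert not app_may_start(init)
```

On a RestartAlways pod the kubelet retries the failing init container indefinitely, which is why the spec only has to observe that run1 stays Waiting.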
SSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:32:45.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 20 12:32:45.432: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 20 12:32:45.471: INFO: Number of nodes with available pods: 0
Jan 20 12:32:45.471: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:32:46.508: INFO: Number of nodes with available pods: 0
Jan 20 12:32:46.508: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:32:47.699: INFO: Number of nodes with available pods: 0
Jan 20 12:32:47.699: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:32:48.507: INFO: Number of nodes with available pods: 0
Jan 20 12:32:48.507: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:32:49.503: INFO: Number of nodes with available pods: 0
Jan 20 12:32:49.503: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:32:51.069: INFO: Number of nodes with available pods: 0
Jan 20 12:32:51.069: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:32:51.583: INFO: Number of nodes with available pods: 0
Jan 20 12:32:51.583: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:32:52.513: INFO: Number of nodes with available pods: 0
Jan 20 12:32:52.513: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:32:53.494: INFO: Number of nodes with available pods: 0
Jan 20 12:32:53.494: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:32:54.528: INFO: Number of nodes with available pods: 0
Jan 20 12:32:54.528: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:32:55.504: INFO: Number of nodes with available pods: 1
Jan 20 12:32:55.505: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 20 12:32:55.620: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:32:56.646: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:32:57.806: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:32:58.655: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:32:59.662: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:33:00.684: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:33:01.653: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:33:01.653: INFO: Pod daemon-set-644j8 is not available
Jan 20 12:33:02.665: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:33:02.665: INFO: Pod daemon-set-644j8 is not available
Jan 20 12:33:03.651: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:33:03.651: INFO: Pod daemon-set-644j8 is not available
Jan 20 12:33:04.652: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:33:04.652: INFO: Pod daemon-set-644j8 is not available
Jan 20 12:33:05.651: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:33:05.651: INFO: Pod daemon-set-644j8 is not available
Jan 20 12:33:06.652: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:33:06.652: INFO: Pod daemon-set-644j8 is not available
Jan 20 12:33:07.652: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:33:07.652: INFO: Pod daemon-set-644j8 is not available
Jan 20 12:33:08.665: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:33:08.665: INFO: Pod daemon-set-644j8 is not available
Jan 20 12:33:09.654: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:33:09.654: INFO: Pod daemon-set-644j8 is not available
Jan 20 12:33:10.674: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:33:10.674: INFO: Pod daemon-set-644j8 is not available
Jan 20 12:33:11.653: INFO: Wrong image for pod: daemon-set-644j8. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan 20 12:33:11.653: INFO: Pod daemon-set-644j8 is not available
Jan 20 12:33:12.758: INFO: Pod daemon-set-flp4r is not available
Jan 20 12:33:13.648: INFO: Pod daemon-set-flp4r is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 20 12:33:13.664: INFO: Number of nodes with available pods: 0
Jan 20 12:33:13.664: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:33:14.951: INFO: Number of nodes with available pods: 0
Jan 20 12:33:14.951: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:33:15.685: INFO: Number of nodes with available pods: 0
Jan 20 12:33:15.685: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:33:17.244: INFO: Number of nodes with available pods: 0
Jan 20 12:33:17.244: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:33:17.687: INFO: Number of nodes with available pods: 0
Jan 20 12:33:17.687: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:33:19.036: INFO: Number of nodes with available pods: 0
Jan 20 12:33:19.036: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:33:19.918: INFO: Number of nodes with available pods: 0
Jan 20 12:33:19.918: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:33:20.680: INFO: Number of nodes with available pods: 0
Jan 20 12:33:20.680: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:33:21.687: INFO: Number of nodes with available pods: 0
Jan 20 12:33:21.687: INFO: Node hunter-server-hu5at5svl7ps is running more than one daemon pod
Jan 20 12:33:22.692: INFO: Number of nodes with available pods: 1
Jan 20 12:33:22.692: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-prg6t, will wait for the garbage collector to delete the pods
Jan 20 12:33:22.910: INFO: Deleting DaemonSet.extensions daemon-set took: 20.767765ms
Jan 20 12:33:23.011: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.625182ms
Jan 20 12:33:32.685: INFO: Number of nodes with available pods: 0
Jan 20 12:33:32.685: INFO: Number of running nodes: 0, number of available pods: 0
Jan 20 12:33:32.702: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-prg6t/daemonsets","resourceVersion":"18854141"},"items":null}

Jan 20 12:33:32.713: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-prg6t/pods","resourceVersion":"18854141"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:33:32.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-prg6t" for this suite.
Jan 20 12:33:40.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:33:40.905: INFO: namespace: e2e-tests-daemonsets-prg6t, resource: bindings, ignored listing per whitelist
Jan 20 12:33:40.934: INFO: namespace e2e-tests-daemonsets-prg6t deletion completed in 8.183396025s

• [SLOW TEST:55.709 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
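The repeated "Number of nodes with available pods" lines above are one availability check per poll: a DaemonSet rollout is considered done when every schedulable node is running an available daemon pod. A sketch of that per-node count, assuming an illustrative pods-by-node mapping rather than the framework's real types:

```python
def nodes_with_available_pods(pods_by_node: dict) -> int:
    """Count nodes that have at least one available daemon pod,
    as in the 'Number of nodes with available pods' log lines."""
    return sum(1 for pods in pods_by_node.values()
               if any(p["available"] for p in pods))

# Single-node cluster from the log: zero mid-rollout, one when the
# replacement pod (daemon-set-flp4r) becomes available.
cluster = {"hunter-server-hu5at5svl7ps":
           [{"name": "daemon-set-644j8", "available": False}]}
assert nodes_with_available_pods(cluster) == 0
cluster["hunter-server-hu5at5svl7ps"][0]["available"] = True
assert nodes_with_available_pods(cluster) == 1
```

The image-mismatch lines in between track the other half of the RollingUpdate check: each pod's image must equal the updated spec's image before the pod counts toward completion.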
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:33:40.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jan 20 12:33:41.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 20 12:33:41.461: INFO: stderr: ""
Jan 20 12:33:41.461: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:33:41.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-254pf" for this suite.
Jan 20 12:33:47.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:33:47.631: INFO: namespace: e2e-tests-kubectl-254pf, resource: bindings, ignored listing per whitelist
Jan 20 12:33:47.677: INFO: namespace e2e-tests-kubectl-254pf deletion completed in 6.201628106s

• [SLOW TEST:6.743 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
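The assertion behind this spec reduces to an exact-line membership check on the `kubectl api-versions` output captured in the stdout line above. A sketch in Python, using an abbreviated copy of that output (a live check would capture the command's stdout itself):

```python
def has_api_version(stdout: str, wanted: str = "v1") -> bool:
    """Return True if `wanted` appears as a whole line of
    newline-separated `kubectl api-versions` output."""
    return wanted in stdout.splitlines()

# Abbreviated from the stdout logged above.
sample = ("apps/v1\nbatch/v1\nnetworking.k8s.io/v1\n"
          "rbac.authorization.k8s.io/v1\nstorage.k8s.io/v1\nv1\n")
assert has_api_version(sample)           # core "v1" is present
assert not has_api_version(sample, "v2")
```

Matching whole lines (rather than substrings) matters: a substring test would wrongly accept `apps/v1` as evidence that the core `v1` group is served.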
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:33:47.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-hrvlq
Jan 20 12:33:58.082: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-hrvlq
STEP: checking the pod's current state and verifying that restartCount is present
Jan 20 12:33:58.086: INFO: Initial restart count of pod liveness-http is 0
Jan 20 12:34:24.351: INFO: Restart count of pod e2e-tests-container-probe-hrvlq/liveness-http is now 1 (26.26489268s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:34:24.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-hrvlq" for this suite.
Jan 20 12:34:30.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:34:30.699: INFO: namespace: e2e-tests-container-probe-hrvlq, resource: bindings, ignored listing per whitelist
Jan 20 12:34:30.704: INFO: namespace e2e-tests-container-probe-hrvlq deletion completed in 6.279028277s

• [SLOW TEST:43.026 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:34:30.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-34b08a62-3b81-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 20 12:34:30.935: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-34b2d097-3b81-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-42glg" to be "success or failure"
Jan 20 12:34:30.945: INFO: Pod "pod-projected-configmaps-34b2d097-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 9.570763ms
Jan 20 12:34:32.972: INFO: Pod "pod-projected-configmaps-34b2d097-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03616874s
Jan 20 12:34:35.011: INFO: Pod "pod-projected-configmaps-34b2d097-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075711386s
Jan 20 12:34:37.069: INFO: Pod "pod-projected-configmaps-34b2d097-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13365293s
Jan 20 12:34:39.595: INFO: Pod "pod-projected-configmaps-34b2d097-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.659231496s
Jan 20 12:34:41.614: INFO: Pod "pod-projected-configmaps-34b2d097-3b81-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.678536903s
STEP: Saw pod success
Jan 20 12:34:41.614: INFO: Pod "pod-projected-configmaps-34b2d097-3b81-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:34:41.619: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-34b2d097-3b81-11ea-8bde-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 12:34:42.222: INFO: Waiting for pod pod-projected-configmaps-34b2d097-3b81-11ea-8bde-0242ac110005 to disappear
Jan 20 12:34:42.378: INFO: Pod pod-projected-configmaps-34b2d097-3b81-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:34:42.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-42glg" for this suite.
Jan 20 12:34:48.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:34:48.774: INFO: namespace: e2e-tests-projected-42glg, resource: bindings, ignored listing per whitelist
Jan 20 12:34:48.783: INFO: namespace e2e-tests-projected-42glg deletion completed in 6.389320969s

• [SLOW TEST:18.079 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
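The "Waiting up to 5m0s for pod ... to be success or failure" lines above, with their growing Elapsed values, are a poll-until-condition loop. A generic sketch of that pattern (the 5-minute timeout and ~2-second interval match the log; the condition here is a toy stand-in for checking the pod phase):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll `condition` every `interval` seconds until it returns truthy,
    returning the elapsed time; raise if `timeout` seconds pass first."""
    start = time.monotonic()
    while True:
        if condition():
            return time.monotonic() - start
        if time.monotonic() - start >= timeout:
            raise TimeoutError("condition not met within timeout")
        time.sleep(interval)

# Toy condition standing in for "pod phase is Succeeded or Failed":
# it becomes true on the third poll.
state = {"polls": 0}
def pod_finished():
    state["polls"] += 1
    return state["polls"] >= 3

elapsed = wait_for(pod_finished, timeout=30.0, interval=0.01)
```

The same loop shape produces the elapsed-time annotations in the liveness-probe and ServiceAccounts specs as well.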
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:34:48.783: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 20 12:34:49.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jan 20 12:34:49.122: INFO: stderr: ""
Jan 20 12:34:49.123: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T15:53:48Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jan 20 12:34:49.128: INFO: Not supported for server versions before "1.13.12"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:34:49.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-6s42g" for this suite.
Jan 20 12:34:55.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:34:55.388: INFO: namespace: e2e-tests-kubectl-6s42g, resource: bindings, ignored listing per whitelist
Jan 20 12:34:55.401: INFO: namespace e2e-tests-kubectl-6s42g deletion completed in 6.25220955s

S [SKIPPING] [6.619 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

    Jan 20 12:34:49.128: Not supported for server versions before "1.13.12"

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
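This spec was skipped because the server (v1.13.8, per the suite header) predates the required v1.13.12. That gate reduces to comparing the numeric components of the two version strings as tuples; a sketch under that assumption (function names are illustrative, not the framework's API):

```python
import re

def version_tuple(v: str) -> tuple:
    """Parse a version like 'v1.13.8' into (1, 13, 8)."""
    return tuple(int(x) for x in re.findall(r"\d+", v))

def supported(server: str, minimum: str) -> bool:
    """True if the server version is at least the required minimum."""
    return version_tuple(server) >= version_tuple(minimum)

# The skip above: v1.13.8 < v1.13.12, so the spec does not run.
assert not supported("v1.13.8", "v1.13.12")
assert supported("v1.13.12", "v1.13.12")
```

Tuple comparison is what makes this correct where naive string comparison fails: `"v1.13.8" > "v1.13.12"` lexically, but `(1, 13, 8) < (1, 13, 12)`.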
S
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:34:55.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jan 20 12:34:56.196: INFO: Waiting up to 5m0s for pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6" in namespace "e2e-tests-svcaccounts-ptqt8" to be "success or failure"
Jan 20 12:34:56.213: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.886591ms
Jan 20 12:34:58.229: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032908912s
Jan 20 12:35:00.255: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058330511s
Jan 20 12:35:02.271: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074618215s
Jan 20 12:35:04.337: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14081203s
Jan 20 12:35:06.346: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.149634771s
Jan 20 12:35:08.392: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.196033501s
Jan 20 12:35:10.505: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.308779477s
Jan 20 12:35:12.537: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.340801767s
STEP: Saw pod success
Jan 20 12:35:12.537: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6" satisfied condition "success or failure"
Jan 20 12:35:12.568: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6 container token-test: 
STEP: delete the pod
Jan 20 12:35:12.690: INFO: Waiting for pod pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6 to disappear
Jan 20 12:35:12.698: INFO: Pod pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-f2nh6 no longer exists
STEP: Creating a pod to test consume service account root CA
Jan 20 12:35:12.711: INFO: Waiting up to 5m0s for pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-v9c77" in namespace "e2e-tests-svcaccounts-ptqt8" to be "success or failure"
Jan 20 12:35:12.721: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-v9c77": Phase="Pending", Reason="", readiness=false. Elapsed: 9.18536ms
Jan 20 12:35:15.014: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-v9c77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302238247s
Jan 20 12:35:17.032: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-v9c77": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320406173s
Jan 20 12:35:19.089: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-v9c77": Phase="Pending", Reason="", readiness=false. Elapsed: 6.377873549s
Jan 20 12:35:21.206: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-v9c77": Phase="Pending", Reason="", readiness=false. Elapsed: 8.494354672s
Jan 20 12:35:23.236: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-v9c77": Phase="Pending", Reason="", readiness=false. Elapsed: 10.524662814s
Jan 20 12:35:25.256: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-v9c77": Phase="Pending", Reason="", readiness=false. Elapsed: 12.544600758s
Jan 20 12:35:27.269: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-v9c77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.557683198s
STEP: Saw pod success
Jan 20 12:35:27.269: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-v9c77" satisfied condition "success or failure"
Jan 20 12:35:27.275: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-v9c77 container root-ca-test: 
STEP: delete the pod
Jan 20 12:35:27.439: INFO: Waiting for pod pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-v9c77 to disappear
Jan 20 12:35:27.447: INFO: Pod pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-v9c77 no longer exists
STEP: Creating a pod to test consume service account namespace
Jan 20 12:35:27.471: INFO: Waiting up to 5m0s for pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n" in namespace "e2e-tests-svcaccounts-ptqt8" to be "success or failure"
Jan 20 12:35:27.485: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n": Phase="Pending", Reason="", readiness=false. Elapsed: 14.450062ms
Jan 20 12:35:29.518: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047482776s
Jan 20 12:35:31.538: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067486584s
Jan 20 12:35:34.366: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.895707125s
Jan 20 12:35:36.376: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.90501799s
Jan 20 12:35:38.451: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.980699425s
Jan 20 12:35:40.482: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n": Phase="Pending", Reason="", readiness=false. Elapsed: 13.01135454s
Jan 20 12:35:42.508: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n": Phase="Pending", Reason="", readiness=false. Elapsed: 15.037084667s
Jan 20 12:35:44.647: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.176427708s
STEP: Saw pod success
Jan 20 12:35:44.647: INFO: Pod "pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n" satisfied condition "success or failure"
Jan 20 12:35:44.674: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n container namespace-test: 
STEP: delete the pod
Jan 20 12:35:44.847: INFO: Waiting for pod pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n to disappear
Jan 20 12:35:44.901: INFO: Pod pod-service-account-43beaaa5-3b81-11ea-8bde-0242ac110005-b5b5n no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:35:44.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-ptqt8" for this suite.
Jan 20 12:35:53.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:35:53.119: INFO: namespace: e2e-tests-svcaccounts-ptqt8, resource: bindings, ignored listing per whitelist
Jan 20 12:35:53.295: INFO: namespace e2e-tests-svcaccounts-ptqt8 deletion completed in 8.353892314s

• [SLOW TEST:57.894 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
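The three pods above (token-test, root-ca-test, namespace-test) each read one file from the service-account volume that Kubernetes auto-mounts at `/var/run/secrets/kubernetes.io/serviceaccount`. A sketch of the completeness check those three pods jointly perform, given a set of file names found at that path:

```python
# Standard auto-mount location for the default service-account volume.
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"
EXPECTED = {"token", "ca.crt", "namespace"}

def missing_files(present: set) -> set:
    """Return which of the expected service-account files are absent."""
    return EXPECTED - present

assert missing_files({"token", "ca.crt", "namespace"}) == set()
assert missing_files({"token"}) == {"ca.crt", "namespace"}
```

Each sub-pod in the log succeeds by reading its file and exiting 0, which is why all three show the same Pending-then-Succeeded polling pattern.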
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:35:53.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 20 12:36:13.852: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:13.968: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:15.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:15.995: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:17.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:17.987: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:19.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:19.983: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:21.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:21.997: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:23.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:23.993: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:25.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:25.996: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:27.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:27.986: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:29.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:30.001: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:31.970: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:31.994: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:33.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:33.991: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:35.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:36.026: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:37.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:37.988: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:39.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:39.986: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:41.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:41.992: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 20 12:36:43.968: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 20 12:36:44.032: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:36:44.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-l9sff" for this suite.
Jan 20 12:37:08.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:37:08.522: INFO: namespace: e2e-tests-container-lifecycle-hook-l9sff, resource: bindings, ignored listing per whitelist
Jan 20 12:37:08.746: INFO: namespace e2e-tests-container-lifecycle-hook-l9sff deletion completed in 24.571592821s

• [SLOW TEST:75.450 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:37:08.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 20 12:37:09.034: INFO: Waiting up to 5m0s for pod "pod-92ef2159-3b81-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-ttwn6" to be "success or failure"
Jan 20 12:37:09.045: INFO: Pod "pod-92ef2159-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.343334ms
Jan 20 12:37:11.066: INFO: Pod "pod-92ef2159-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032005863s
Jan 20 12:37:13.078: INFO: Pod "pod-92ef2159-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044406727s
Jan 20 12:37:15.096: INFO: Pod "pod-92ef2159-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061951982s
Jan 20 12:37:17.126: INFO: Pod "pod-92ef2159-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091962047s
Jan 20 12:37:19.158: INFO: Pod "pod-92ef2159-3b81-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.124452663s
STEP: Saw pod success
Jan 20 12:37:19.158: INFO: Pod "pod-92ef2159-3b81-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:37:19.178: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-92ef2159-3b81-11ea-8bde-0242ac110005 container test-container: 
STEP: delete the pod
Jan 20 12:37:19.634: INFO: Waiting for pod pod-92ef2159-3b81-11ea-8bde-0242ac110005 to disappear
Jan 20 12:37:19.693: INFO: Pod pod-92ef2159-3b81-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:37:19.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ttwn6" for this suite.
Jan 20 12:37:25.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:37:26.001: INFO: namespace: e2e-tests-emptydir-ttwn6, resource: bindings, ignored listing per whitelist
Jan 20 12:37:26.005: INFO: namespace e2e-tests-emptydir-ttwn6 deletion completed in 6.30478695s

• [SLOW TEST:17.259 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:37:26.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-nthll
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-nthll
STEP: Deleting pre-stop pod
Jan 20 12:37:49.304: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:37:49.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-nthll" for this suite.
Jan 20 12:38:29.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:38:29.775: INFO: namespace: e2e-tests-prestop-nthll, resource: bindings, ignored listing per whitelist
Jan 20 12:38:29.786: INFO: namespace e2e-tests-prestop-nthll deletion completed in 40.455786366s

• [SLOW TEST:63.781 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:38:29.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-9ttm
STEP: Creating a pod to test atomic-volume-subpath
Jan 20 12:38:30.219: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9ttm" in namespace "e2e-tests-subpath-xwbvn" to be "success or failure"
Jan 20 12:38:30.243: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Pending", Reason="", readiness=false. Elapsed: 23.498256ms
Jan 20 12:38:32.257: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037262592s
Jan 20 12:38:34.276: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056946961s
Jan 20 12:38:36.740: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.520348365s
Jan 20 12:38:38.781: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561785322s
Jan 20 12:38:40.791: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.571288997s
Jan 20 12:38:42.809: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Pending", Reason="", readiness=false. Elapsed: 12.589573122s
Jan 20 12:38:44.837: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Running", Reason="", readiness=true. Elapsed: 14.618049569s
Jan 20 12:38:46.888: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Running", Reason="", readiness=false. Elapsed: 16.66897295s
Jan 20 12:38:48.907: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Running", Reason="", readiness=false. Elapsed: 18.687609858s
Jan 20 12:38:50.924: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Running", Reason="", readiness=false. Elapsed: 20.704550013s
Jan 20 12:38:52.981: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Running", Reason="", readiness=false. Elapsed: 22.761378096s
Jan 20 12:38:55.000: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Running", Reason="", readiness=false. Elapsed: 24.780162968s
Jan 20 12:38:57.065: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Running", Reason="", readiness=false. Elapsed: 26.84569479s
Jan 20 12:38:59.085: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Running", Reason="", readiness=false. Elapsed: 28.865194478s
Jan 20 12:39:01.095: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Running", Reason="", readiness=false. Elapsed: 30.875608168s
Jan 20 12:39:03.115: INFO: Pod "pod-subpath-test-secret-9ttm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.895184505s
STEP: Saw pod success
Jan 20 12:39:03.115: INFO: Pod "pod-subpath-test-secret-9ttm" satisfied condition "success or failure"
Jan 20 12:39:03.123: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-subpath-test-secret-9ttm container test-container-subpath-secret-9ttm: 
STEP: delete the pod
Jan 20 12:39:03.691: INFO: Waiting for pod pod-subpath-test-secret-9ttm to disappear
Jan 20 12:39:04.194: INFO: Pod pod-subpath-test-secret-9ttm no longer exists
STEP: Deleting pod pod-subpath-test-secret-9ttm
Jan 20 12:39:04.194: INFO: Deleting pod "pod-subpath-test-secret-9ttm" in namespace "e2e-tests-subpath-xwbvn"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:39:04.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-xwbvn" for this suite.
Jan 20 12:39:10.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:39:10.404: INFO: namespace: e2e-tests-subpath-xwbvn, resource: bindings, ignored listing per whitelist
Jan 20 12:39:10.430: INFO: namespace e2e-tests-subpath-xwbvn deletion completed in 6.213670109s

• [SLOW TEST:40.644 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:39:10.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jan 20 12:39:19.121: INFO: Waiting up to 5m0s for pod "client-envvars-e072914e-3b81-11ea-8bde-0242ac110005" in namespace "e2e-tests-pods-rrpgn" to be "success or failure"
Jan 20 12:39:19.284: INFO: Pod "client-envvars-e072914e-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 163.130657ms
Jan 20 12:39:21.300: INFO: Pod "client-envvars-e072914e-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178965295s
Jan 20 12:39:23.318: INFO: Pod "client-envvars-e072914e-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197447005s
Jan 20 12:39:25.695: INFO: Pod "client-envvars-e072914e-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.574204812s
Jan 20 12:39:27.737: INFO: Pod "client-envvars-e072914e-3b81-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.61593659s
Jan 20 12:39:29.758: INFO: Pod "client-envvars-e072914e-3b81-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.63705199s
STEP: Saw pod success
Jan 20 12:39:29.758: INFO: Pod "client-envvars-e072914e-3b81-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:39:29.766: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod client-envvars-e072914e-3b81-11ea-8bde-0242ac110005 container env3cont: 
STEP: delete the pod
Jan 20 12:39:30.183: INFO: Waiting for pod client-envvars-e072914e-3b81-11ea-8bde-0242ac110005 to disappear
Jan 20 12:39:30.197: INFO: Pod client-envvars-e072914e-3b81-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:39:30.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-rrpgn" for this suite.
Jan 20 12:40:28.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:40:28.351: INFO: namespace: e2e-tests-pods-rrpgn, resource: bindings, ignored listing per whitelist
Jan 20 12:40:28.407: INFO: namespace e2e-tests-pods-rrpgn deletion completed in 58.203972885s

• [SLOW TEST:77.977 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:40:28.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jan 20 12:40:28.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-vnmcp'
Jan 20 12:40:31.485: INFO: stderr: ""
Jan 20 12:40:31.485: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 20 12:40:31.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vnmcp'
Jan 20 12:40:31.610: INFO: stderr: ""
Jan 20 12:40:31.610: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Jan 20 12:40:36.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vnmcp'
Jan 20 12:40:36.785: INFO: stderr: ""
Jan 20 12:40:36.785: INFO: stdout: "update-demo-nautilus-8mwcn update-demo-nautilus-cbrkf "
Jan 20 12:40:36.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mwcn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vnmcp'
Jan 20 12:40:36.907: INFO: stderr: ""
Jan 20 12:40:36.907: INFO: stdout: ""
Jan 20 12:40:36.907: INFO: update-demo-nautilus-8mwcn is created but not running
Jan 20 12:40:41.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vnmcp'
Jan 20 12:40:42.094: INFO: stderr: ""
Jan 20 12:40:42.094: INFO: stdout: "update-demo-nautilus-8mwcn update-demo-nautilus-cbrkf "
Jan 20 12:40:42.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mwcn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vnmcp'
Jan 20 12:40:42.284: INFO: stderr: ""
Jan 20 12:40:42.284: INFO: stdout: ""
Jan 20 12:40:42.284: INFO: update-demo-nautilus-8mwcn is created but not running
Jan 20 12:40:47.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-vnmcp'
Jan 20 12:40:47.452: INFO: stderr: ""
Jan 20 12:40:47.452: INFO: stdout: "update-demo-nautilus-8mwcn update-demo-nautilus-cbrkf "
Jan 20 12:40:47.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mwcn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vnmcp'
Jan 20 12:40:47.587: INFO: stderr: ""
Jan 20 12:40:47.587: INFO: stdout: "true"
Jan 20 12:40:47.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8mwcn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vnmcp'
Jan 20 12:40:47.705: INFO: stderr: ""
Jan 20 12:40:47.705: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 12:40:47.705: INFO: validating pod update-demo-nautilus-8mwcn
Jan 20 12:40:47.728: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 12:40:47.728: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 12:40:47.728: INFO: update-demo-nautilus-8mwcn is verified up and running
Jan 20 12:40:47.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cbrkf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vnmcp'
Jan 20 12:40:47.872: INFO: stderr: ""
Jan 20 12:40:47.872: INFO: stdout: "true"
Jan 20 12:40:47.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cbrkf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-vnmcp'
Jan 20 12:40:47.984: INFO: stderr: ""
Jan 20 12:40:47.984: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 20 12:40:47.984: INFO: validating pod update-demo-nautilus-cbrkf
Jan 20 12:40:47.994: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 20 12:40:47.994: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 20 12:40:47.994: INFO: update-demo-nautilus-cbrkf is verified up and running
STEP: using delete to clean up resources
Jan 20 12:40:47.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-vnmcp'
Jan 20 12:40:48.179: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 12:40:48.179: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 20 12:40:48.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-vnmcp'
Jan 20 12:40:48.396: INFO: stderr: "No resources found.\n"
Jan 20 12:40:48.397: INFO: stdout: ""
Jan 20 12:40:48.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-vnmcp -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 20 12:40:48.622: INFO: stderr: ""
Jan 20 12:40:48.623: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:40:48.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-vnmcp" for this suite.
Jan 20 12:41:12.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:41:12.896: INFO: namespace: e2e-tests-kubectl-vnmcp, resource: bindings, ignored listing per whitelist
Jan 20 12:41:12.977: INFO: namespace e2e-tests-kubectl-vnmcp deletion completed in 24.327627184s

• [SLOW TEST:44.570 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:41:12.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 12:41:13.261: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24814ae6-3b82-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-9nd7k" to be "success or failure"
Jan 20 12:41:13.378: INFO: Pod "downwardapi-volume-24814ae6-3b82-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 116.286244ms
Jan 20 12:41:15.395: INFO: Pod "downwardapi-volume-24814ae6-3b82-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133553667s
Jan 20 12:41:17.415: INFO: Pod "downwardapi-volume-24814ae6-3b82-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153309998s
Jan 20 12:41:19.431: INFO: Pod "downwardapi-volume-24814ae6-3b82-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169281452s
Jan 20 12:41:22.129: INFO: Pod "downwardapi-volume-24814ae6-3b82-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.868254864s
Jan 20 12:41:24.167: INFO: Pod "downwardapi-volume-24814ae6-3b82-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.906102304s
STEP: Saw pod success
Jan 20 12:41:24.167: INFO: Pod "downwardapi-volume-24814ae6-3b82-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:41:24.175: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-24814ae6-3b82-11ea-8bde-0242ac110005 container client-container: 
STEP: delete the pod
Jan 20 12:41:24.866: INFO: Waiting for pod downwardapi-volume-24814ae6-3b82-11ea-8bde-0242ac110005 to disappear
Jan 20 12:41:24.902: INFO: Pod downwardapi-volume-24814ae6-3b82-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:41:24.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9nd7k" for this suite.
Jan 20 12:41:31.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:41:31.098: INFO: namespace: e2e-tests-downward-api-9nd7k, resource: bindings, ignored listing per whitelist
Jan 20 12:41:31.193: INFO: namespace e2e-tests-downward-api-9nd7k deletion completed in 6.280128861s

• [SLOW TEST:18.215 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:41:31.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:41:43.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-xmpzn" for this suite.
Jan 20 12:41:49.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:41:49.690: INFO: namespace: e2e-tests-kubelet-test-xmpzn, resource: bindings, ignored listing per whitelist
Jan 20 12:41:49.702: INFO: namespace e2e-tests-kubelet-test-xmpzn deletion completed in 6.231593917s

• [SLOW TEST:18.509 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:41:49.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-3a710dcf-3b82-11ea-8bde-0242ac110005
STEP: Creating secret with name s-test-opt-upd-3a710f0a-3b82-11ea-8bde-0242ac110005
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3a710dcf-3b82-11ea-8bde-0242ac110005
STEP: Updating secret s-test-opt-upd-3a710f0a-3b82-11ea-8bde-0242ac110005
STEP: Creating secret with name s-test-opt-create-3a710f35-3b82-11ea-8bde-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:42:04.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-69w75" for this suite.
Jan 20 12:42:28.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:42:29.001: INFO: namespace: e2e-tests-projected-69w75, resource: bindings, ignored listing per whitelist
Jan 20 12:42:29.062: INFO: namespace e2e-tests-projected-69w75 deletion completed in 24.41138408s

• [SLOW TEST:39.359 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
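The "optional updates" spec above creates, deletes, and updates secrets that are all mounted through one projected volume. A minimal sketch of the pod manifest shape that test exercises (this is not the e2e framework's actual Go code; the image, command, and mount path are illustrative — only the projected-volume structure with `optional: true` secret sources follows the Kubernetes v1 API):

```python
def projected_secret_pod(namespace, del_name, upd_name, create_name):
    # Three optional secret sources in one projected volume, mirroring the
    # s-test-opt-del / s-test-opt-upd / s-test-opt-create names in the log.
    # "optional: True" lets the pod start even when a secret is absent.
    sources = [
        {"secret": {"name": name, "optional": True}}
        for name in (del_name, upd_name, create_name)
    ]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-secrets", "namespace": namespace},
        "spec": {
            "containers": [{
                "name": "projected-secret-volume-test",   # illustrative
                "image": "busybox",                        # illustrative
                "command": ["sh", "-c", "sleep 3600"],
                "volumeMounts": [{
                    "name": "projected-secret-volume",
                    "mountPath": "/etc/projected-secret-volume",
                }],
            }],
            "volumes": [{
                "name": "projected-secret-volume",
                "projected": {"sources": sources},
            }],
        },
    }
```

The kubelet periodically resyncs such volumes, which is why the test can delete one secret, update another, create a third, and then just wait to observe the changes in the mounted files.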
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:42:29.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-51d37365-3b82-11ea-8bde-0242ac110005
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:42:41.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-2dw8x" for this suite.
Jan 20 12:43:05.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:43:05.736: INFO: namespace: e2e-tests-configmap-2dw8x, resource: bindings, ignored listing per whitelist
Jan 20 12:43:05.772: INFO: namespace e2e-tests-configmap-2dw8x deletion completed in 24.32679239s

• [SLOW TEST:36.710 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
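The ConfigMap "binary data" spec relies on the split between the `data` field (UTF-8 strings) and `binaryData` (base64-encoded bytes). A sketch of the ConfigMap object the test creates — field names follow the v1 API, the key names are illustrative:

```python
import base64

def binary_configmap(name, text, blob):
    # "data" holds UTF-8 strings; "binaryData" holds arbitrary bytes,
    # base64-encoded on the wire. When mounted as a volume, each key
    # becomes a file whose content is the decoded bytes.
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name},
        "data": {"data-1": text},                                  # text key
        "binaryData": {"dump.bin": base64.b64encode(blob).decode("ascii")},
    }
```

This is why the log shows two separate waits, one "for pod with text data" and one "for pod with binary data": both kinds of keys must surface as files in the same volume.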
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:43:05.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jan 20 12:43:16.335: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:43:42.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-d59b5" for this suite.
Jan 20 12:43:48.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:43:48.787: INFO: namespace: e2e-tests-namespaces-d59b5, resource: bindings, ignored listing per whitelist
Jan 20 12:43:48.899: INFO: namespace e2e-tests-namespaces-d59b5 deletion completed in 6.164044013s
STEP: Destroying namespace "e2e-tests-nsdeletetest-vthzz" for this suite.
Jan 20 12:43:48.902: INFO: Namespace e2e-tests-nsdeletetest-vthzz was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-29kqh" for this suite.
Jan 20 12:43:54.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:43:55.025: INFO: namespace: e2e-tests-nsdeletetest-29kqh, resource: bindings, ignored listing per whitelist
Jan 20 12:43:55.079: INFO: namespace e2e-tests-nsdeletetest-29kqh deletion completed in 6.176597063s

• [SLOW TEST:49.306 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
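The namespace spec above deletes a namespace and then waits for it (and every pod inside it) to disappear — namespace deletion is asynchronous, so the test has to poll. A generic sketch of that poll-until-gone loop (this is not the framework's `WaitForNamespacesDeleted` implementation, just the same idea; timeout and interval values are illustrative):

```python
import time

def wait_for_deletion(exists, timeout=300.0, interval=2.0, sleep=time.sleep):
    # Poll until exists() reports the object is gone or the deadline passes.
    # Returns True if the object disappeared within the timeout.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not exists():
            return True
        sleep(interval)
    return False
```

The same pattern explains the repeated "Waiting for pod ... to disappear" lines elsewhere in this log: each line is one iteration of such a loop.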
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:43:55.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-8514e094-3b82-11ea-8bde-0242ac110005
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-8514e094-3b82-11ea-8bde-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:44:07.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-xd924" for this suite.
Jan 20 12:44:31.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:44:31.679: INFO: namespace: e2e-tests-configmap-xd924, resource: bindings, ignored listing per whitelist
Jan 20 12:44:31.735: INFO: namespace e2e-tests-configmap-xd924 deletion completed in 24.207656399s

• [SLOW TEST:36.655 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:44:31.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-9af0f00b-3b82-11ea-8bde-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-9af0f10c-3b82-11ea-8bde-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-9af0f00b-3b82-11ea-8bde-0242ac110005
STEP: Updating configmap cm-test-opt-upd-9af0f10c-3b82-11ea-8bde-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-9af0f13d-3b82-11ea-8bde-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:46:19.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wf65w" for this suite.
Jan 20 12:46:43.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:46:43.299: INFO: namespace: e2e-tests-projected-wf65w, resource: bindings, ignored listing per whitelist
Jan 20 12:46:43.325: INFO: namespace e2e-tests-projected-wf65w deletion completed in 24.315477143s

• [SLOW TEST:131.590 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:46:43.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-e95a0b70-3b82-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 20 12:46:43.524: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e95c5bab-3b82-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-rj6tg" to be "success or failure"
Jan 20 12:46:43.536: INFO: Pod "pod-projected-configmaps-e95c5bab-3b82-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.670401ms
Jan 20 12:46:46.006: INFO: Pod "pod-projected-configmaps-e95c5bab-3b82-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.481262277s
Jan 20 12:46:48.037: INFO: Pod "pod-projected-configmaps-e95c5bab-3b82-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.51222766s
Jan 20 12:46:50.062: INFO: Pod "pod-projected-configmaps-e95c5bab-3b82-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.537440877s
Jan 20 12:46:52.079: INFO: Pod "pod-projected-configmaps-e95c5bab-3b82-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.555089205s
Jan 20 12:46:54.091: INFO: Pod "pod-projected-configmaps-e95c5bab-3b82-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.566378763s
STEP: Saw pod success
Jan 20 12:46:54.091: INFO: Pod "pod-projected-configmaps-e95c5bab-3b82-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:46:54.096: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-e95c5bab-3b82-11ea-8bde-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 12:46:54.472: INFO: Waiting for pod pod-projected-configmaps-e95c5bab-3b82-11ea-8bde-0242ac110005 to disappear
Jan 20 12:46:54.495: INFO: Pod pod-projected-configmaps-e95c5bab-3b82-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:46:54.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-rj6tg" for this suite.
Jan 20 12:47:00.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:47:00.804: INFO: namespace: e2e-tests-projected-rj6tg, resource: bindings, ignored listing per whitelist
Jan 20 12:47:00.885: INFO: namespace e2e-tests-projected-rj6tg deletion completed in 6.370775624s

• [SLOW TEST:17.559 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
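The "mappings as non-root" spec combines two features: a projected configMap source with `items` that remaps a key to a new file path, and a pod-level security context that runs the container as a non-root UID. A sketch of that pod shape (UID, image, key, and path values are illustrative; the structure follows the v1 API):

```python
def projected_configmap_nonroot_pod(cm_name):
    # One key ("data-2") remapped to a nested path, read by a container
    # running as a non-root user. UID 1000 is an arbitrary non-root choice.
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-projected-configmaps"},
        "spec": {
            "securityContext": {"runAsUser": 1000},
            "containers": [{
                "name": "projected-configmap-volume-test",
                "image": "busybox",
                "command": ["sh", "-c",
                            "cat /etc/projected-configmap-volume/path/to/data-2"],
                "volumeMounts": [{
                    "name": "cfg",
                    "mountPath": "/etc/projected-configmap-volume",
                }],
            }],
            "volumes": [{
                "name": "cfg",
                "projected": {"sources": [{"configMap": {
                    "name": cm_name,
                    "items": [{"key": "data-2", "path": "path/to/data-2"}],
                }}]},
            }],
        },
    }
```

The "success or failure" polling in the log then just waits for this one-shot container to exit 0 after printing the mapped file.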
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:47:00.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 20 12:50:04.289: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:04.439: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:06.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:06.453: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:08.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:08.473: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:10.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:10.467: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:12.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:12.458: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:14.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:14.460: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:16.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:16.466: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:18.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:18.472: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:20.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:20.461: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:22.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:22.469: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:24.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:24.464: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:26.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:26.473: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:28.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:28.634: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:30.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:30.465: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:32.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:32.477: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:34.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:34.459: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:36.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:36.461: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:38.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:38.502: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 20 12:50:40.440: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 20 12:50:40.500: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:50:40.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-v42n6" for this suite.
Jan 20 12:51:04.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:51:04.703: INFO: namespace: e2e-tests-container-lifecycle-hook-v42n6, resource: bindings, ignored listing per whitelist
Jan 20 12:51:04.763: INFO: namespace e2e-tests-container-lifecycle-hook-v42n6 deletion completed in 24.240419228s

• [SLOW TEST:243.878 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
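The lifecycle-hook spec creates a handler pod first (the "container to handle the HTTPGet hook request" step) and then a pod whose `postStart` exec hook calls back to it. A sketch of the hooked pod's shape — the handler IP, image, and command are illustrative assumptions, not the test's literal values; only the `lifecycle.postStart.exec` structure follows the v1 API:

```python
def pod_with_poststart_exec(handler_ip):
    # postStart runs in the container right after it starts; the container
    # is not considered running until the hook completes. Here the hook
    # notifies a separate handler pod (handler_ip is hypothetical).
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "pod-with-poststart-exec-hook"},
        "spec": {"containers": [{
            "name": "pod-with-poststart-exec-hook",
            "image": "busybox",
            "command": ["sh", "-c", "sleep 600"],
            "lifecycle": {"postStart": {"exec": {"command": [
                "sh", "-c",
                f"wget -q -O- http://{handler_ip}:8080/echo?msg=poststart",
            ]}}},
        }]},
    }
```

Because hook delivery is verified on the handler side, the test can "check poststart hook" simply by asking the handler what it received.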
S
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:51:04.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jan 20 12:51:15.178: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-8530507c-3b83-11ea-8bde-0242ac110005", GenerateName:"", Namespace:"e2e-tests-pods-j2css", SelfLink:"/api/v1/namespaces/e2e-tests-pods-j2css/pods/pod-submit-remove-8530507c-3b83-11ea-8bde-0242ac110005", UID:"85425b59-3b83-11ea-a994-fa163e34d433", ResourceVersion:"18856149", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715121465, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"951234792"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-ktd7x", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000feccc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), 
Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-ktd7x", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00171e448), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-server-hu5at5svl7ps", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001c42420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00171e4f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc00171e610)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00171e618), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00171e61c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715121465, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715121474, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715121474, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715121465, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.1.240", PodIP:"10.32.0.4", StartTime:(*v1.Time)(0xc000c06de0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000c06e60), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, 
RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://692e838ccd38a7be0a9980e622ea37e0bc1f4e5e2d69d2e3dea37d06eda277d0"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:51:32.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-j2css" for this suite.
Jan 20 12:51:38.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:51:39.102: INFO: namespace: e2e-tests-pods-j2css, resource: bindings, ignored listing per whitelist
Jan 20 12:51:39.122: INFO: namespace e2e-tests-pods-j2css deletion completed in 6.405935797s

• [SLOW TEST:34.358 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:51:39.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 12:51:39.435: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99bb4071-3b83-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-rj66k" to be "success or failure"
Jan 20 12:51:39.580: INFO: Pod "downwardapi-volume-99bb4071-3b83-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 145.214085ms
Jan 20 12:51:41.618: INFO: Pod "downwardapi-volume-99bb4071-3b83-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183190198s
Jan 20 12:51:43.630: INFO: Pod "downwardapi-volume-99bb4071-3b83-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.195756525s
Jan 20 12:51:45.799: INFO: Pod "downwardapi-volume-99bb4071-3b83-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.364013268s
Jan 20 12:51:47.824: INFO: Pod "downwardapi-volume-99bb4071-3b83-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.389242113s
Jan 20 12:51:49.841: INFO: Pod "downwardapi-volume-99bb4071-3b83-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.406114685s
STEP: Saw pod success
Jan 20 12:51:49.841: INFO: Pod "downwardapi-volume-99bb4071-3b83-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:51:49.848: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-99bb4071-3b83-11ea-8bde-0242ac110005 container client-container: 
STEP: delete the pod
Jan 20 12:51:50.453: INFO: Waiting for pod downwardapi-volume-99bb4071-3b83-11ea-8bde-0242ac110005 to disappear
Jan 20 12:51:50.678: INFO: Pod downwardapi-volume-99bb4071-3b83-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:51:50.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rj66k" for this suite.
Jan 20 12:51:56.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:51:56.957: INFO: namespace: e2e-tests-downward-api-rj66k, resource: bindings, ignored listing per whitelist
Jan 20 12:51:57.032: INFO: namespace e2e-tests-downward-api-rj66k deletion completed in 6.342641004s

• [SLOW TEST:17.910 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
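The "Waiting up to 5m0s for pod ... to be \"success or failure\"" lines above are produced by a poll loop in the e2e framework that re-checks the pod phase every couple of seconds until it succeeds or the timeout expires. A minimal Python sketch of that pattern (hypothetical `wait_for_phase` helper; the actual framework code is Go in `test/e2e/framework`):

```python
import time

def wait_for_phase(get_phase, target, timeout=300.0, interval=2.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns `target` or `timeout` seconds pass.

    Returns the elapsed time on success, raises TimeoutError otherwise.
    `clock` and `sleep` are injectable so the loop can be tested without
    real waiting.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase == target:
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(
                f"phase {phase!r} after {elapsed:.1f}s, wanted {target!r}")
        sleep(interval)
```

The log's "Elapsed: 2.18s / 4.19s / ..." progression is exactly this loop reporting `elapsed` on each unsuccessful poll.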
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:51:57.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-24zk7
Jan 20 12:52:07.313: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-24zk7
STEP: checking the pod's current state and verifying that restartCount is present
Jan 20 12:52:07.321: INFO: Initial restart count of pod liveness-http is 0
Jan 20 12:52:27.527: INFO: Restart count of pod e2e-tests-container-probe-24zk7/liveness-http is now 1 (20.206151957s elapsed)
Jan 20 12:52:47.715: INFO: Restart count of pod e2e-tests-container-probe-24zk7/liveness-http is now 2 (40.393932051s elapsed)
Jan 20 12:53:08.063: INFO: Restart count of pod e2e-tests-container-probe-24zk7/liveness-http is now 3 (1m0.742491571s elapsed)
Jan 20 12:53:26.302: INFO: Restart count of pod e2e-tests-container-probe-24zk7/liveness-http is now 4 (1m18.981126822s elapsed)
Jan 20 12:54:29.532: INFO: Restart count of pod e2e-tests-container-probe-24zk7/liveness-http is now 5 (2m22.21150617s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:54:29.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-24zk7" for this suite.
Jan 20 12:54:39.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:54:39.968: INFO: namespace: e2e-tests-container-probe-24zk7, resource: bindings, ignored listing per whitelist
Jan 20 12:54:39.986: INFO: namespace e2e-tests-container-probe-24zk7 deletion completed in 10.279778778s

• [SLOW TEST:162.953 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
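The probe test above records the restart count at each observation (0, 1, 2, 3, 4, 5) and passes only if the sequence never goes backwards. A sketch of that assertion (my own illustrative helper, not the actual e2e check, which lives in `container_probe.go`):

```python
def restart_counts_monotonic(counts):
    """Return True if each observed restart count is >= the previous one.

    The kubelet may be observed at the same count twice between probe
    failures, so adjacent equal values are allowed; a decrease means the
    counter was reset, which the conformance test treats as a failure.
    """
    return all(b >= a for a, b in zip(counts, counts[1:]))
```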
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:54:39.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 20 12:54:40.897: INFO: Waiting up to 5m0s for pod "pod-05ddbbf5-3b84-11ea-8bde-0242ac110005" in namespace "e2e-tests-emptydir-fxrwr" to be "success or failure"
Jan 20 12:54:40.925: INFO: Pod "pod-05ddbbf5-3b84-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 28.238117ms
Jan 20 12:54:43.072: INFO: Pod "pod-05ddbbf5-3b84-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174736264s
Jan 20 12:54:45.086: INFO: Pod "pod-05ddbbf5-3b84-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189029573s
Jan 20 12:54:47.103: INFO: Pod "pod-05ddbbf5-3b84-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206335613s
Jan 20 12:54:49.779: INFO: Pod "pod-05ddbbf5-3b84-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.881995403s
Jan 20 12:54:51.916: INFO: Pod "pod-05ddbbf5-3b84-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 11.018791085s
Jan 20 12:54:53.955: INFO: Pod "pod-05ddbbf5-3b84-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 13.057872983s
Jan 20 12:54:56.169: INFO: Pod "pod-05ddbbf5-3b84-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 15.272246128s
Jan 20 12:54:58.204: INFO: Pod "pod-05ddbbf5-3b84-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.307028541s
STEP: Saw pod success
Jan 20 12:54:58.204: INFO: Pod "pod-05ddbbf5-3b84-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:54:58.217: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-05ddbbf5-3b84-11ea-8bde-0242ac110005 container test-container: 
STEP: delete the pod
Jan 20 12:54:59.728: INFO: Waiting for pod pod-05ddbbf5-3b84-11ea-8bde-0242ac110005 to disappear
Jan 20 12:54:59.738: INFO: Pod pod-05ddbbf5-3b84-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:54:59.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-fxrwr" for this suite.
Jan 20 12:55:07.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:55:08.012: INFO: namespace: e2e-tests-emptydir-fxrwr, resource: bindings, ignored listing per whitelist
Jan 20 12:55:08.107: INFO: namespace e2e-tests-emptydir-fxrwr deletion completed in 8.346157307s

• [SLOW TEST:28.121 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:55:08.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jan 20 12:55:23.179: INFO: Successfully updated pod "annotationupdate16551ebb-3b84-11ea-8bde-0242ac110005"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:55:25.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-zxv9r" for this suite.
Jan 20 12:55:49.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:55:49.713: INFO: namespace: e2e-tests-projected-zxv9r, resource: bindings, ignored listing per whitelist
Jan 20 12:55:49.756: INFO: namespace e2e-tests-projected-zxv9r deletion completed in 24.196762413s

• [SLOW TEST:41.649 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
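The projected downward API test updates the pod's annotations and then waits for the change to appear in the mounted file. The kubelet serializes annotations as one key="value" pair per line; a rough approximation of that rendering (illustrative only, without the escaping rules the kubelet applies):

```python
def render_annotations(annotations):
    """Serialize an annotations dict roughly the way a downward-API
    volume file presents it: sorted key="value" pairs, one per line."""
    return "\n".join(f'{k}="{v}"' for k, v in sorted(annotations.items()))
```

The "waiting to observe update in volume" step is then just the usual poll loop comparing the file's current content against the rendering of the updated annotations.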
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:55:49.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-2f1686ce-3b84-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 20 12:55:50.024: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2f17d029-3b84-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-wgvmg" to be "success or failure"
Jan 20 12:55:50.033: INFO: Pod "pod-projected-configmaps-2f17d029-3b84-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.595701ms
Jan 20 12:55:52.055: INFO: Pod "pod-projected-configmaps-2f17d029-3b84-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031060266s
Jan 20 12:55:54.088: INFO: Pod "pod-projected-configmaps-2f17d029-3b84-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064053802s
Jan 20 12:55:56.254: INFO: Pod "pod-projected-configmaps-2f17d029-3b84-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.230039121s
Jan 20 12:55:58.282: INFO: Pod "pod-projected-configmaps-2f17d029-3b84-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.257988923s
Jan 20 12:56:00.311: INFO: Pod "pod-projected-configmaps-2f17d029-3b84-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.286807206s
STEP: Saw pod success
Jan 20 12:56:00.311: INFO: Pod "pod-projected-configmaps-2f17d029-3b84-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 12:56:00.334: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-2f17d029-3b84-11ea-8bde-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 12:56:00.590: INFO: Waiting for pod pod-projected-configmaps-2f17d029-3b84-11ea-8bde-0242ac110005 to disappear
Jan 20 12:56:00.624: INFO: Pod pod-projected-configmaps-2f17d029-3b84-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:56:00.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wgvmg" for this suite.
Jan 20 12:56:06.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:56:07.067: INFO: namespace: e2e-tests-projected-wgvmg, resource: bindings, ignored listing per whitelist
Jan 20 12:56:07.094: INFO: namespace e2e-tests-projected-wgvmg deletion completed in 6.435908782s

• [SLOW TEST:17.337 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:56:07.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-3964afd2-3b84-11ea-8bde-0242ac110005
STEP: Creating configMap with name cm-test-opt-upd-3964b0bc-3b84-11ea-8bde-0242ac110005
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-3964afd2-3b84-11ea-8bde-0242ac110005
STEP: Updating configmap cm-test-opt-upd-3964b0bc-3b84-11ea-8bde-0242ac110005
STEP: Creating configMap with name cm-test-opt-create-3964b115-3b84-11ea-8bde-0242ac110005
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 12:56:31.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wdjpp" for this suite.
Jan 20 12:56:55.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 12:56:55.841: INFO: namespace: e2e-tests-configmap-wdjpp, resource: bindings, ignored listing per whitelist
Jan 20 12:56:55.929: INFO: namespace e2e-tests-configmap-wdjpp deletion completed in 24.202126255s

• [SLOW TEST:48.835 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
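The test above exercises *optional* configMap volume sources: it deletes one referenced map (cm-test-opt-del), updates another, and creates a third, and the pod must keep running while the volume converges. The semantics that make the delete safe can be sketched as follows (a simplification of kubelet behavior, not its actual code):

```python
def resolve_configmap(store, name, optional=False):
    """Look up a configMap by name.

    A missing *optional* configMap projects as empty instead of erroring,
    which is why deleting an optional source does not break the pod;
    a missing required configMap is a hard failure.
    """
    try:
        return store[name]
    except KeyError:
        if optional:
            return {}
        raise
```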
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 12:56:55.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-pp42n
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-pp42n
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-pp42n
Jan 20 12:56:56.180: INFO: Found 0 stateful pods, waiting for 1
Jan 20 12:57:06.217: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 20 12:57:16.195: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 20 12:57:16.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 12:57:17.102: INFO: stderr: "I0120 12:57:16.466603    3161 log.go:172] (0xc000138840) (0xc0006632c0) Create stream\nI0120 12:57:16.467256    3161 log.go:172] (0xc000138840) (0xc0006632c0) Stream added, broadcasting: 1\nI0120 12:57:16.482460    3161 log.go:172] (0xc000138840) Reply frame received for 1\nI0120 12:57:16.482716    3161 log.go:172] (0xc000138840) (0xc000746000) Create stream\nI0120 12:57:16.482775    3161 log.go:172] (0xc000138840) (0xc000746000) Stream added, broadcasting: 3\nI0120 12:57:16.485087    3161 log.go:172] (0xc000138840) Reply frame received for 3\nI0120 12:57:16.485135    3161 log.go:172] (0xc000138840) (0xc0007460a0) Create stream\nI0120 12:57:16.485151    3161 log.go:172] (0xc000138840) (0xc0007460a0) Stream added, broadcasting: 5\nI0120 12:57:16.493903    3161 log.go:172] (0xc000138840) Reply frame received for 5\nI0120 12:57:16.928779    3161 log.go:172] (0xc000138840) Data frame received for 3\nI0120 12:57:16.928894    3161 log.go:172] (0xc000746000) (3) Data frame handling\nI0120 12:57:16.928931    3161 log.go:172] (0xc000746000) (3) Data frame sent\nI0120 12:57:17.085351    3161 log.go:172] (0xc000138840) Data frame received for 1\nI0120 12:57:17.085670    3161 log.go:172] (0xc000138840) (0xc000746000) Stream removed, broadcasting: 3\nI0120 12:57:17.085724    3161 log.go:172] (0xc0006632c0) (1) Data frame handling\nI0120 12:57:17.085750    3161 log.go:172] (0xc000138840) (0xc0007460a0) Stream removed, broadcasting: 5\nI0120 12:57:17.085780    3161 log.go:172] (0xc0006632c0) (1) Data frame sent\nI0120 12:57:17.085787    3161 log.go:172] (0xc000138840) (0xc0006632c0) Stream removed, broadcasting: 1\nI0120 12:57:17.085822    3161 log.go:172] (0xc000138840) Go away received\nI0120 12:57:17.087230    3161 log.go:172] (0xc000138840) (0xc0006632c0) Stream removed, broadcasting: 1\nI0120 12:57:17.087258    3161 log.go:172] (0xc000138840) (0xc000746000) Stream removed, broadcasting: 3\nI0120 12:57:17.087275    3161 log.go:172] (0xc000138840) (0xc0007460a0) Stream removed, broadcasting: 5\n"
Jan 20 12:57:17.102: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 12:57:17.102: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 20 12:57:17.186: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 20 12:57:27.205: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 20 12:57:27.205: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 12:57:27.272: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 20 12:57:27.272: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  }]
Jan 20 12:57:27.272: INFO: 
Jan 20 12:57:27.272: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 20 12:57:28.337: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.983402815s
Jan 20 12:57:30.620: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.918148831s
Jan 20 12:57:32.089: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.635349909s
Jan 20 12:57:34.068: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.166908668s
Jan 20 12:57:35.212: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.187171611s
Jan 20 12:57:36.415: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.0430882s
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-pp42n
Jan 20 12:57:37.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:57:39.716: INFO: stderr: "I0120 12:57:38.028437    3183 log.go:172] (0xc000138630) (0xc000661400) Create stream\nI0120 12:57:38.028777    3183 log.go:172] (0xc000138630) (0xc000661400) Stream added, broadcasting: 1\nI0120 12:57:38.060787    3183 log.go:172] (0xc000138630) Reply frame received for 1\nI0120 12:57:38.061063    3183 log.go:172] (0xc000138630) (0xc000312000) Create stream\nI0120 12:57:38.061084    3183 log.go:172] (0xc000138630) (0xc000312000) Stream added, broadcasting: 3\nI0120 12:57:38.064441    3183 log.go:172] (0xc000138630) Reply frame received for 3\nI0120 12:57:38.064478    3183 log.go:172] (0xc000138630) (0xc00031a000) Create stream\nI0120 12:57:38.064487    3183 log.go:172] (0xc000138630) (0xc00031a000) Stream added, broadcasting: 5\nI0120 12:57:38.067480    3183 log.go:172] (0xc000138630) Reply frame received for 5\nI0120 12:57:39.108431    3183 log.go:172] (0xc000138630) Data frame received for 3\nI0120 12:57:39.108690    3183 log.go:172] (0xc000312000) (3) Data frame handling\nI0120 12:57:39.108772    3183 log.go:172] (0xc000312000) (3) Data frame sent\nI0120 12:57:39.702402    3183 log.go:172] (0xc000138630) (0xc00031a000) Stream removed, broadcasting: 5\nI0120 12:57:39.702529    3183 log.go:172] (0xc000138630) Data frame received for 1\nI0120 12:57:39.702588    3183 log.go:172] (0xc000138630) (0xc000312000) Stream removed, broadcasting: 3\nI0120 12:57:39.702618    3183 log.go:172] (0xc000661400) (1) Data frame handling\nI0120 12:57:39.702626    3183 log.go:172] (0xc000661400) (1) Data frame sent\nI0120 12:57:39.702631    3183 log.go:172] (0xc000138630) (0xc000661400) Stream removed, broadcasting: 1\nI0120 12:57:39.702637    3183 log.go:172] (0xc000138630) Go away received\nI0120 12:57:39.703551    3183 log.go:172] (0xc000138630) (0xc000661400) Stream removed, broadcasting: 1\nI0120 12:57:39.703571    3183 log.go:172] (0xc000138630) (0xc000312000) Stream removed, broadcasting: 3\nI0120 12:57:39.703578    3183 log.go:172] (0xc000138630) (0xc00031a000) Stream removed, broadcasting: 5\n"
Jan 20 12:57:39.716: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 12:57:39.716: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 20 12:57:39.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:57:40.388: INFO: rc: 1
Jan 20 12:57:40.388: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc00141b770 exit status 1   true [0xc00217c4e8 0xc00217c500 0xc00217c520] [0xc00217c4e8 0xc00217c500 0xc00217c520] [0xc00217c4f8 0xc00217c518] [0x935700 0x935700] 0xc0011a90e0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

Jan 20 12:57:50.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:57:51.676: INFO: stderr: "I0120 12:57:51.211504    3225 log.go:172] (0xc0005e24d0) (0xc000797220) Create stream\nI0120 12:57:51.212013    3225 log.go:172] (0xc0005e24d0) (0xc000797220) Stream added, broadcasting: 1\nI0120 12:57:51.221867    3225 log.go:172] (0xc0005e24d0) Reply frame received for 1\nI0120 12:57:51.221953    3225 log.go:172] (0xc0005e24d0) (0xc0007972c0) Create stream\nI0120 12:57:51.221962    3225 log.go:172] (0xc0005e24d0) (0xc0007972c0) Stream added, broadcasting: 3\nI0120 12:57:51.223379    3225 log.go:172] (0xc0005e24d0) Reply frame received for 3\nI0120 12:57:51.223405    3225 log.go:172] (0xc0005e24d0) (0xc000797360) Create stream\nI0120 12:57:51.223412    3225 log.go:172] (0xc0005e24d0) (0xc000797360) Stream added, broadcasting: 5\nI0120 12:57:51.224745    3225 log.go:172] (0xc0005e24d0) Reply frame received for 5\nI0120 12:57:51.349917    3225 log.go:172] (0xc0005e24d0) Data frame received for 3\nI0120 12:57:51.350032    3225 log.go:172] (0xc0007972c0) (3) Data frame handling\nI0120 12:57:51.350055    3225 log.go:172] (0xc0007972c0) (3) Data frame sent\nI0120 12:57:51.361278    3225 log.go:172] (0xc0005e24d0) Data frame received for 5\nI0120 12:57:51.361397    3225 log.go:172] (0xc000797360) (5) Data frame handling\nI0120 12:57:51.361536    3225 log.go:172] (0xc000797360) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0120 12:57:51.659279    3225 log.go:172] (0xc0005e24d0) Data frame received for 1\nI0120 12:57:51.659636    3225 log.go:172] (0xc0005e24d0) (0xc0007972c0) Stream removed, broadcasting: 3\nI0120 12:57:51.659927    3225 log.go:172] (0xc000797220) (1) Data frame handling\nI0120 12:57:51.659972    3225 log.go:172] (0xc000797220) (1) Data frame sent\nI0120 12:57:51.660002    3225 log.go:172] (0xc0005e24d0) (0xc000797220) Stream removed, broadcasting: 1\nI0120 12:57:51.661264    3225 log.go:172] (0xc0005e24d0) (0xc000797360) Stream removed, broadcasting: 5\nI0120 12:57:51.661379    3225 log.go:172] (0xc0005e24d0) (0xc000797220) Stream removed, broadcasting: 1\nI0120 12:57:51.661389    3225 log.go:172] (0xc0005e24d0) (0xc0007972c0) Stream removed, broadcasting: 3\nI0120 12:57:51.661395    3225 log.go:172] (0xc0005e24d0) (0xc000797360) Stream removed, broadcasting: 5\nI0120 12:57:51.661792    3225 log.go:172] (0xc0005e24d0) Go away received\n"
Jan 20 12:57:51.676: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 12:57:51.676: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 20 12:57:51.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:57:52.442: INFO: stderr: "I0120 12:57:52.097021    3247 log.go:172] (0xc0008a2210) (0xc00089c5a0) Create stream\nI0120 12:57:52.097226    3247 log.go:172] (0xc0008a2210) (0xc00089c5a0) Stream added, broadcasting: 1\nI0120 12:57:52.101866    3247 log.go:172] (0xc0008a2210) Reply frame received for 1\nI0120 12:57:52.101914    3247 log.go:172] (0xc0008a2210) (0xc0005d0e60) Create stream\nI0120 12:57:52.101930    3247 log.go:172] (0xc0008a2210) (0xc0005d0e60) Stream added, broadcasting: 3\nI0120 12:57:52.102990    3247 log.go:172] (0xc0008a2210) Reply frame received for 3\nI0120 12:57:52.103013    3247 log.go:172] (0xc0008a2210) (0xc00089c640) Create stream\nI0120 12:57:52.103024    3247 log.go:172] (0xc0008a2210) (0xc00089c640) Stream added, broadcasting: 5\nI0120 12:57:52.104216    3247 log.go:172] (0xc0008a2210) Reply frame received for 5\nI0120 12:57:52.262448    3247 log.go:172] (0xc0008a2210) Data frame received for 3\nI0120 12:57:52.262725    3247 log.go:172] (0xc0005d0e60) (3) Data frame handling\nI0120 12:57:52.262810    3247 log.go:172] (0xc0005d0e60) (3) Data frame sent\nI0120 12:57:52.262908    3247 log.go:172] (0xc0008a2210) Data frame received for 5\nI0120 12:57:52.262972    3247 log.go:172] (0xc00089c640) (5) Data frame handling\nI0120 12:57:52.263015    3247 log.go:172] (0xc00089c640) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0120 12:57:52.427043    3247 log.go:172] (0xc0008a2210) Data frame received for 1\nI0120 12:57:52.427198    3247 log.go:172] (0xc0008a2210) (0xc00089c640) Stream removed, broadcasting: 5\nI0120 12:57:52.427262    3247 log.go:172] (0xc00089c5a0) (1) Data frame handling\nI0120 12:57:52.427290    3247 log.go:172] (0xc00089c5a0) (1) Data frame sent\nI0120 12:57:52.427380    3247 log.go:172] (0xc0008a2210) (0xc0005d0e60) Stream removed, broadcasting: 3\nI0120 12:57:52.427537    3247 log.go:172] (0xc0008a2210) (0xc00089c5a0) Stream removed, broadcasting: 1\nI0120 12:57:52.427581    3247 log.go:172] (0xc0008a2210) Go away received\nI0120 12:57:52.428592    3247 log.go:172] (0xc0008a2210) (0xc00089c5a0) Stream removed, broadcasting: 1\nI0120 12:57:52.428619    3247 log.go:172] (0xc0008a2210) (0xc0005d0e60) Stream removed, broadcasting: 3\nI0120 12:57:52.428630    3247 log.go:172] (0xc0008a2210) (0xc00089c640) Stream removed, broadcasting: 5\n"
Jan 20 12:57:52.442: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan 20 12:57:52.442: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan 20 12:57:52.459: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 12:57:52.460: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 12:57:52.460: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 20 12:57:52.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 12:57:53.254: INFO: stderr: "I0120 12:57:52.883443    3269 log.go:172] (0xc0005d02c0) (0xc0002c0780) Create stream\nI0120 12:57:52.883879    3269 log.go:172] (0xc0005d02c0) (0xc0002c0780) Stream added, broadcasting: 1\nI0120 12:57:52.935860    3269 log.go:172] (0xc0005d02c0) Reply frame received for 1\nI0120 12:57:52.936238    3269 log.go:172] (0xc0005d02c0) (0xc000892000) Create stream\nI0120 12:57:52.936281    3269 log.go:172] (0xc0005d02c0) (0xc000892000) Stream added, broadcasting: 3\nI0120 12:57:52.940783    3269 log.go:172] (0xc0005d02c0) Reply frame received for 3\nI0120 12:57:52.940840    3269 log.go:172] (0xc0005d02c0) (0xc0005eec80) Create stream\nI0120 12:57:52.940858    3269 log.go:172] (0xc0005d02c0) (0xc0005eec80) Stream added, broadcasting: 5\nI0120 12:57:52.943684    3269 log.go:172] (0xc0005d02c0) Reply frame received for 5\nI0120 12:57:53.089369    3269 log.go:172] (0xc0005d02c0) Data frame received for 3\nI0120 12:57:53.089448    3269 log.go:172] (0xc000892000) (3) Data frame handling\nI0120 12:57:53.089464    3269 log.go:172] (0xc000892000) (3) Data frame sent\nI0120 12:57:53.245599    3269 log.go:172] (0xc0005d02c0) (0xc000892000) Stream removed, broadcasting: 3\nI0120 12:57:53.245796    3269 log.go:172] (0xc0005d02c0) Data frame received for 1\nI0120 12:57:53.245813    3269 log.go:172] (0xc0002c0780) (1) Data frame handling\nI0120 12:57:53.245843    3269 log.go:172] (0xc0002c0780) (1) Data frame sent\nI0120 12:57:53.245848    3269 log.go:172] (0xc0005d02c0) (0xc0002c0780) Stream removed, broadcasting: 1\nI0120 12:57:53.245974    3269 log.go:172] (0xc0005d02c0) (0xc0005eec80) Stream removed, broadcasting: 5\nI0120 12:57:53.246913    3269 log.go:172] (0xc0005d02c0) Go away received\nI0120 12:57:53.247198    3269 log.go:172] (0xc0005d02c0) (0xc0002c0780) Stream removed, broadcasting: 1\nI0120 12:57:53.247221    3269 log.go:172] (0xc0005d02c0) (0xc000892000) Stream removed, broadcasting: 3\nI0120 12:57:53.247229    3269 log.go:172] (0xc0005d02c0) (0xc0005eec80) Stream removed, broadcasting: 5\n"
Jan 20 12:57:53.255: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 12:57:53.255: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

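The `mv … || true` invocations above deliberately break each pod's readiness: moving `index.html` out of nginx's web root makes the HTTP readiness probe fail, while `|| true` keeps the exec's exit status 0 so the same command can be re-run idempotently. A minimal local sketch of that pattern (paths under `/tmp/demo` are illustrative stand-ins, not from the test):

```shell
# Stand-in for the pod's filesystem; /tmp/demo/html plays the role of
# /usr/share/nginx/html inside the ss-* pods.
mkdir -p /tmp/demo/html
echo hello > /tmp/demo/html/index.html

# First run: the file exists, so mv succeeds and reports the rename,
# just like the "'…/index.html' -> '/tmp/index.html'" stdout above.
mv -v /tmp/demo/html/index.html /tmp/demo/ || true

# Second run: the source is gone and mv fails, but `|| true` masks the
# non-zero exit, so a retrying caller still sees rc 0.
mv -v /tmp/demo/html/index.html /tmp/demo/ || true
echo "rc=$?"   # prints rc=0 either way
```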
Jan 20 12:57:53.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 12:57:54.231: INFO: stderr: "I0120 12:57:53.529336    3290 log.go:172] (0xc00070e2c0) (0xc00073c640) Create stream\nI0120 12:57:53.529781    3290 log.go:172] (0xc00070e2c0) (0xc00073c640) Stream added, broadcasting: 1\nI0120 12:57:53.539272    3290 log.go:172] (0xc00070e2c0) Reply frame received for 1\nI0120 12:57:53.539357    3290 log.go:172] (0xc00070e2c0) (0xc0000badc0) Create stream\nI0120 12:57:53.539369    3290 log.go:172] (0xc00070e2c0) (0xc0000badc0) Stream added, broadcasting: 3\nI0120 12:57:53.542113    3290 log.go:172] (0xc00070e2c0) Reply frame received for 3\nI0120 12:57:53.542183    3290 log.go:172] (0xc00070e2c0) (0xc0000baf00) Create stream\nI0120 12:57:53.542197    3290 log.go:172] (0xc00070e2c0) (0xc0000baf00) Stream added, broadcasting: 5\nI0120 12:57:53.547532    3290 log.go:172] (0xc00070e2c0) Reply frame received for 5\nI0120 12:57:53.803719    3290 log.go:172] (0xc00070e2c0) Data frame received for 3\nI0120 12:57:53.803917    3290 log.go:172] (0xc0000badc0) (3) Data frame handling\nI0120 12:57:53.803985    3290 log.go:172] (0xc0000badc0) (3) Data frame sent\nI0120 12:57:54.217022    3290 log.go:172] (0xc00070e2c0) Data frame received for 1\nI0120 12:57:54.217142    3290 log.go:172] (0xc00073c640) (1) Data frame handling\nI0120 12:57:54.217184    3290 log.go:172] (0xc00073c640) (1) Data frame sent\nI0120 12:57:54.218013    3290 log.go:172] (0xc00070e2c0) (0xc0000baf00) Stream removed, broadcasting: 5\nI0120 12:57:54.218245    3290 log.go:172] (0xc00070e2c0) (0xc0000badc0) Stream removed, broadcasting: 3\nI0120 12:57:54.218300    3290 log.go:172] (0xc00070e2c0) (0xc00073c640) Stream removed, broadcasting: 1\nI0120 12:57:54.218324    3290 log.go:172] (0xc00070e2c0) Go away received\nI0120 12:57:54.218885    3290 log.go:172] (0xc00070e2c0) (0xc00073c640) Stream removed, broadcasting: 1\nI0120 12:57:54.218895    3290 log.go:172] (0xc00070e2c0) (0xc0000badc0) Stream removed, broadcasting: 3\nI0120 12:57:54.218901    3290 log.go:172] 
(0xc00070e2c0) (0xc0000baf00) Stream removed, broadcasting: 5\n"
Jan 20 12:57:54.231: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 12:57:54.231: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 20 12:57:54.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan 20 12:57:54.755: INFO: stderr: "I0120 12:57:54.395785    3312 log.go:172] (0xc0007620b0) (0xc0006aa640) Create stream\nI0120 12:57:54.396131    3312 log.go:172] (0xc0007620b0) (0xc0006aa640) Stream added, broadcasting: 1\nI0120 12:57:54.400587    3312 log.go:172] (0xc0007620b0) Reply frame received for 1\nI0120 12:57:54.400739    3312 log.go:172] (0xc0007620b0) (0xc00048cc80) Create stream\nI0120 12:57:54.400755    3312 log.go:172] (0xc0007620b0) (0xc00048cc80) Stream added, broadcasting: 3\nI0120 12:57:54.401870    3312 log.go:172] (0xc0007620b0) Reply frame received for 3\nI0120 12:57:54.401896    3312 log.go:172] (0xc0007620b0) (0xc0004ce000) Create stream\nI0120 12:57:54.401906    3312 log.go:172] (0xc0007620b0) (0xc0004ce000) Stream added, broadcasting: 5\nI0120 12:57:54.402874    3312 log.go:172] (0xc0007620b0) Reply frame received for 5\nI0120 12:57:54.568048    3312 log.go:172] (0xc0007620b0) Data frame received for 3\nI0120 12:57:54.568150    3312 log.go:172] (0xc00048cc80) (3) Data frame handling\nI0120 12:57:54.568175    3312 log.go:172] (0xc00048cc80) (3) Data frame sent\nI0120 12:57:54.742197    3312 log.go:172] (0xc0007620b0) Data frame received for 1\nI0120 12:57:54.742289    3312 log.go:172] (0xc0006aa640) (1) Data frame handling\nI0120 12:57:54.742313    3312 log.go:172] (0xc0006aa640) (1) Data frame sent\nI0120 12:57:54.743095    3312 log.go:172] (0xc0007620b0) (0xc0004ce000) Stream removed, broadcasting: 5\nI0120 12:57:54.743349    3312 log.go:172] (0xc0007620b0) (0xc00048cc80) Stream removed, broadcasting: 3\nI0120 12:57:54.743551    3312 log.go:172] (0xc0007620b0) (0xc0006aa640) Stream removed, broadcasting: 1\nI0120 12:57:54.743629    3312 log.go:172] (0xc0007620b0) Go away received\nI0120 12:57:54.744344    3312 log.go:172] (0xc0007620b0) (0xc0006aa640) Stream removed, broadcasting: 1\nI0120 12:57:54.744391    3312 log.go:172] (0xc0007620b0) (0xc00048cc80) Stream removed, broadcasting: 3\nI0120 12:57:54.744409    3312 log.go:172] 
(0xc0007620b0) (0xc0004ce000) Stream removed, broadcasting: 5\n"
Jan 20 12:57:54.755: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan 20 12:57:54.755: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan 20 12:57:54.755: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 12:57:54.838: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 20 12:58:04.881: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 20 12:58:04.881: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 20 12:58:04.881: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
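The waits above are a poll loop: the framework re-reads the StatefulSet's `status` until `readyReplicas` (and later `replicas`) hits the target. A hedged sketch of that loop in shell, polling a local file in place of the API (`/tmp/replicas` and the self-decrement are stand-ins for the controller scaling pods down):

```shell
# /tmp/replicas stands in for `.status.readyReplicas`; against a real
# cluster one would instead poll something like
# `kubectl get statefulset ss -o jsonpath='{.status.readyReplicas}'`.
echo 3 > /tmp/replicas

wait_for_zero() {
  while :; do
    n=$(cat /tmp/replicas)
    if [ "$n" -eq 0 ]; then
      echo "readyReplicas reached 0"
      return 0
    fi
    # Simulate progress; the e2e framework instead sleeps and re-polls
    # while the controller drives the count down.
    echo $((n - 1)) > /tmp/replicas
  done
}
wait_for_zero
```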
Jan 20 12:58:04.928: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 20 12:58:04.929: INFO: ss-0  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  }]
Jan 20 12:58:04.929: INFO: ss-1  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  }]
Jan 20 12:58:04.929: INFO: ss-2  hunter-server-hu5at5svl7ps  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  }]
Jan 20 12:58:04.929: INFO: 
Jan 20 12:58:04.929: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 20 12:58:07.977: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 20 12:58:07.977: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  }]
Jan 20 12:58:07.977: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  }]
Jan 20 12:58:07.977: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  }]
Jan 20 12:58:07.977: INFO: 
Jan 20 12:58:07.977: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 20 12:58:09.392: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 20 12:58:09.392: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  }]
Jan 20 12:58:09.392: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  }]
Jan 20 12:58:09.392: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  }]
Jan 20 12:58:09.392: INFO: 
Jan 20 12:58:09.392: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 20 12:58:10.520: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 20 12:58:10.520: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  }]
Jan 20 12:58:10.520: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  }]
Jan 20 12:58:10.520: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  }]
Jan 20 12:58:10.520: INFO: 
Jan 20 12:58:10.520: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 20 12:58:12.886: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 20 12:58:12.886: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  }]
Jan 20 12:58:12.886: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  }]
Jan 20 12:58:12.886: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  }]
Jan 20 12:58:12.886: INFO: 
Jan 20 12:58:12.886: INFO: StatefulSet ss has not reached scale 0, at 3
Jan 20 12:58:14.599: INFO: POD   NODE                        PHASE    GRACE  CONDITIONS
Jan 20 12:58:14.599: INFO: ss-0  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:56:56 +0000 UTC  }]
Jan 20 12:58:14.599: INFO: ss-1  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  }]
Jan 20 12:58:14.599: INFO: ss-2  hunter-server-hu5at5svl7ps  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 12:57:27 +0000 UTC  }]
Jan 20 12:58:14.599: INFO: 
Jan 20 12:58:14.599: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace e2e-tests-statefulset-pp42n
Jan 20 12:58:16.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:58:16.612: INFO: rc: 1
Jan 20 12:58:16.612: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc001adecc0 exit status 1   true [0xc00217c5d0 0xc00217c5e8 0xc00217c600] [0xc00217c5d0 0xc00217c5e8 0xc00217c600] [0xc00217c5e0 0xc00217c5f8] [0x935700 0x935700] 0xc0011a9b00 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1

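From here the framework is in RunHostCmd's retry loop: each failed `kubectl exec` (rc 1, first because the `nginx` container is gone, then because pod `ss-0` itself has been deleted) is retried after a 10s wait until it succeeds or the surrounding timeout fires. A generic sketch of that retry shape (the `retry` helper and attempt count are illustrative, not the framework's API):

```shell
# Retry a command up to $1 times, pausing between attempts. The e2e
# framework waits 10s between tries; the delay here is 0 so the sketch
# runs instantly.
retry() {
  tries=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$tries" ]; then
      return 1   # out of attempts: surface the failure, like the test timeout
    fi
    sleep 0
  done
  return 0
}

retry 3 true && echo "command eventually succeeded"
```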
Jan 20 12:58:26.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:58:26.816: INFO: rc: 1
Jan 20 12:58:26.816: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f2d560 exit status 1   true [0xc00032f260 0xc00032f278 0xc00032f2b0] [0xc00032f260 0xc00032f278 0xc00032f2b0] [0xc00032f270 0xc00032f290] [0x935700 0x935700] 0xc00143f200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 12:58:36.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:58:36.989: INFO: rc: 1
Jan 20 12:58:36.990: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc000f2d680 exit status 1   true [0xc00032f2c8 0xc00032f320 0xc00032f378] [0xc00032f2c8 0xc00032f320 0xc00032f378] [0xc00032f300 0xc00032f360] [0x935700 0x935700] 0xc00143ff80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 12:58:46.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:58:47.483: INFO: rc: 1
Jan 20 12:58:47.483: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001adee10 exit status 1   true [0xc00217c608 0xc00217c620 0xc00217c638] [0xc00217c608 0xc00217c620 0xc00217c638] [0xc00217c618 0xc00217c630] [0x935700 0x935700] 0xc0011a9e00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 12:58:57.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:58:57.665: INFO: rc: 1
Jan 20 12:58:57.665: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0003a1f50 exit status 1   true [0xc00000e010 0xc00170e010 0xc00170e028] [0xc00000e010 0xc00170e010 0xc00170e028] [0xc00170e008 0xc00170e020] [0x935700 0x935700] 0xc00143f920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 12:59:07.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:59:07.794: INFO: rc: 1
Jan 20 12:59:07.794: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f40120 exit status 1   true [0xc001ed0000 0xc001ed0018 0xc001ed0030] [0xc001ed0000 0xc001ed0018 0xc001ed0030] [0xc001ed0010 0xc001ed0028] [0x935700 0x935700] 0xc0011c6240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 12:59:17.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:59:18.019: INFO: rc: 1
Jan 20 12:59:18.019: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f40270 exit status 1   true [0xc001ed0038 0xc001ed0050 0xc001ed0068] [0xc001ed0038 0xc001ed0050 0xc001ed0068] [0xc001ed0048 0xc001ed0060] [0x935700 0x935700] 0xc0011c6660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 12:59:28.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:59:28.202: INFO: rc: 1
Jan 20 12:59:28.202: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f40420 exit status 1   true [0xc001ed0070 0xc001ed0088 0xc001ed00a8] [0xc001ed0070 0xc001ed0088 0xc001ed00a8] [0xc001ed0080 0xc001ed00a0] [0x935700 0x935700] 0xc0011c6ae0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 12:59:38.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:59:38.347: INFO: rc: 1
Jan 20 12:59:38.347: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d00330 exit status 1   true [0xc00217c000 0xc00217c018 0xc00217c030] [0xc00217c000 0xc00217c018 0xc00217c030] [0xc00217c010 0xc00217c028] [0x935700 0x935700] 0xc00147b920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 12:59:48.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:59:48.545: INFO: rc: 1
Jan 20 12:59:48.545: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d00450 exit status 1   true [0xc00217c038 0xc00217c050 0xc00217c068] [0xc00217c038 0xc00217c050 0xc00217c068] [0xc00217c048 0xc00217c060] [0x935700 0x935700] 0xc001dd63c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 12:59:58.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 12:59:58.719: INFO: rc: 1
Jan 20 12:59:58.719: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f405a0 exit status 1   true [0xc001ed00b0 0xc001ed00c8 0xc001ed00e0] [0xc001ed00b0 0xc001ed00c8 0xc001ed00e0] [0xc001ed00c0 0xc001ed00d8] [0x935700 0x935700] 0xc0011c6e40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:00:08.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:00:08.903: INFO: rc: 1
Jan 20 13:00:08.903: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00141a1b0 exit status 1   true [0xc00032ebc0 0xc00032ebf0 0xc00032ec18] [0xc00032ebc0 0xc00032ebf0 0xc00032ec18] [0xc00032ebe8 0xc00032ec10] [0x935700 0x935700] 0xc0016307e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:00:18.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:00:19.068: INFO: rc: 1
Jan 20 13:00:19.069: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f40750 exit status 1   true [0xc001ed00e8 0xc001ed0100 0xc001ed0118] [0xc001ed00e8 0xc001ed0100 0xc001ed0118] [0xc001ed00f8 0xc001ed0110] [0x935700 0x935700] 0xc0011c72c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:00:29.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:00:29.198: INFO: rc: 1
Jan 20 13:00:29.198: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c24090 exit status 1   true [0xc00170e030 0xc00170e048 0xc00170e060] [0xc00170e030 0xc00170e048 0xc00170e060] [0xc00170e040 0xc00170e058] [0x935700 0x935700] 0xc0017dca80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:00:39.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:00:39.375: INFO: rc: 1
Jan 20 13:00:39.376: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d00690 exit status 1   true [0xc00217c070 0xc00217c088 0xc00217c0a0] [0xc00217c070 0xc00217c088 0xc00217c0a0] [0xc00217c080 0xc00217c098] [0x935700 0x935700] 0xc001dd6780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:00:49.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:00:49.554: INFO: rc: 1
Jan 20 13:00:49.554: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00141a360 exit status 1   true [0xc00032ec30 0xc00032ec80 0xc00032eca8] [0xc00032ec30 0xc00032ec80 0xc00032eca8] [0xc00032ec68 0xc00032eca0] [0x935700 0x935700] 0xc00187c960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:00:59.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:00:59.739: INFO: rc: 1
Jan 20 13:00:59.740: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c246c0 exit status 1   true [0xc00170e070 0xc00170e088 0xc00170e0a0] [0xc00170e070 0xc00170e088 0xc00170e0a0] [0xc00170e080 0xc00170e098] [0x935700 0x935700] 0xc0017dd740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:01:09.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:01:09.928: INFO: rc: 1
Jan 20 13:01:09.928: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0003a1f80 exit status 1   true [0xc00000e010 0xc001ed0008 0xc001ed0020] [0xc00000e010 0xc001ed0008 0xc001ed0020] [0xc001ed0000 0xc001ed0018] [0x935700 0x935700] 0xc0016307e0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:01:19.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:01:20.104: INFO: rc: 1
Jan 20 13:01:20.104: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f40150 exit status 1   true [0xc00032ebc0 0xc00032ebf0 0xc00032ec18] [0xc00032ebc0 0xc00032ebf0 0xc00032ec18] [0xc00032ebe8 0xc00032ec10] [0x935700 0x935700] 0xc00147b920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:01:30.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:01:30.316: INFO: rc: 1
Jan 20 13:01:30.316: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00141a150 exit status 1   true [0xc001ed0028 0xc001ed0040 0xc001ed0058] [0xc001ed0028 0xc001ed0040 0xc001ed0058] [0xc001ed0038 0xc001ed0050] [0x935700 0x935700] 0xc0011c6240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:01:40.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:01:40.514: INFO: rc: 1
Jan 20 13:01:40.514: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00141a330 exit status 1   true [0xc001ed0060 0xc001ed0078 0xc001ed0098] [0xc001ed0060 0xc001ed0078 0xc001ed0098] [0xc001ed0070 0xc001ed0088] [0x935700 0x935700] 0xc0011c6660 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:01:50.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:01:50.719: INFO: rc: 1
Jan 20 13:01:50.719: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f40330 exit status 1   true [0xc00032ec30 0xc00032ec80 0xc00032eca8] [0xc00032ec30 0xc00032ec80 0xc00032eca8] [0xc00032ec68 0xc00032eca0] [0x935700 0x935700] 0xc00143fb60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:02:00.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:02:00.909: INFO: rc: 1
Jan 20 13:02:00.909: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d00300 exit status 1   true [0xc00170e000 0xc00170e018 0xc00170e030] [0xc00170e000 0xc00170e018 0xc00170e030] [0xc00170e010 0xc00170e028] [0x935700 0x935700] 0xc00187c960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:02:10.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:02:11.068: INFO: rc: 1
Jan 20 13:02:11.069: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d004b0 exit status 1   true [0xc00170e038 0xc00170e050 0xc00170e0a8] [0xc00170e038 0xc00170e050 0xc00170e0a8] [0xc00170e048 0xc00170e060] [0x935700 0x935700] 0xc0017dc900 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:02:21.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:02:21.248: INFO: rc: 1
Jan 20 13:02:21.249: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c24750 exit status 1   true [0xc00217c000 0xc00217c018 0xc00217c030] [0xc00217c000 0xc00217c018 0xc00217c030] [0xc00217c010 0xc00217c028] [0x935700 0x935700] 0xc001dd6360 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:02:31.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:02:31.381: INFO: rc: 1
Jan 20 13:02:31.381: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001f405d0 exit status 1   true [0xc00032ecb0 0xc00032ecc8 0xc00032ecf0] [0xc00032ecb0 0xc00032ecc8 0xc00032ecf0] [0xc00032ecc0 0xc00032ecd8] [0x935700 0x935700] 0xc0018e4540 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:02:41.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:02:41.554: INFO: rc: 1
Jan 20 13:02:41.554: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c248d0 exit status 1   true [0xc00217c038 0xc00217c050 0xc00217c068] [0xc00217c038 0xc00217c050 0xc00217c068] [0xc00217c048 0xc00217c060] [0x935700 0x935700] 0xc001dd6720 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:02:51.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:02:51.743: INFO: rc: 1
Jan 20 13:02:51.743: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001c249f0 exit status 1   true [0xc00217c070 0xc00217c088 0xc00217c0a0] [0xc00217c070 0xc00217c088 0xc00217c0a0] [0xc00217c080 0xc00217c098] [0x935700 0x935700] 0xc001dd6c60 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:03:01.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:03:01.919: INFO: rc: 1
Jan 20 13:03:01.919: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0003a1f50 exit status 1   true [0xc00000e010 0xc00217c010 0xc00217c028] [0xc00000e010 0xc00217c010 0xc00217c028] [0xc00217c008 0xc00217c020] [0x935700 0x935700] 0xc00143f920 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:03:11.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:03:12.129: INFO: rc: 1
Jan 20 13:03:12.129: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc00141a120 exit status 1   true [0xc00217c030 0xc00217c048 0xc00217c060] [0xc00217c030 0xc00217c048 0xc00217c060] [0xc00217c040 0xc00217c058] [0x935700 0x935700] 0xc00147b140 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

Jan 20 13:03:22.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-pp42n ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 20 13:03:22.349: INFO: rc: 1
Jan 20 13:03:22.349: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Jan 20 13:03:22.349: INFO: Scaling statefulset ss to 0
Jan 20 13:03:22.370: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jan 20 13:03:22.375: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pp42n
Jan 20 13:03:22.379: INFO: Scaling statefulset ss to 0
Jan 20 13:03:22.391: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 13:03:22.396: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 13:03:22.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-pp42n" for this suite.
Jan 20 13:03:30.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:03:30.590: INFO: namespace: e2e-tests-statefulset-pp42n, resource: bindings, ignored listing per whitelist
Jan 20 13:03:30.881: INFO: namespace e2e-tests-statefulset-pp42n deletion completed in 8.445258733s

• [SLOW TEST:394.951 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
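The StatefulSet test above re-runs the same `kubectl exec ... mv ...` command every 10s until it succeeds or the overall timeout lapses ("Waiting 10s to retry failed RunHostCmd"). A minimal sketch of that fixed-interval retry pattern, in plain POSIX sh; `run_with_retry` and its parameters are illustrative stand-ins, not names from the e2e framework:

```shell
#!/bin/sh
# Fixed-interval retry: re-run a command up to max_tries times,
# sleeping interval seconds between failed attempts, mirroring the
# "rc: 1 ... Waiting 10s to retry" loop in the log above.
run_with_retry() {
  max_tries=$1; shift
  interval=$1; shift
  i=1
  while [ "$i" -le "$max_tries" ]; do
    if "$@"; then
      echo "succeeded on attempt $i"
      return 0
    fi
    echo "rc=$? on attempt $i; waiting ${interval}s to retry" >&2
    sleep "$interval"
    i=$((i + 1))
  done
  return 1
}

# Example: a command that always fails, retried 3 times with no delay.
run_with_retry 3 0 false || echo "all attempts failed"
```

In the log the retried command itself ends with `|| true`, so the nonzero rc comes from the pod lookup failing server-side (`pods "ss-0" not found`), not from `mv`.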
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 13:03:30.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jan 20 13:03:31.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41f74b18-3b85-11ea-8bde-0242ac110005" in namespace "e2e-tests-downward-api-l554t" to be "success or failure"
Jan 20 13:03:31.405: INFO: Pod "downwardapi-volume-41f74b18-3b85-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 7.500929ms
Jan 20 13:03:33.425: INFO: Pod "downwardapi-volume-41f74b18-3b85-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027095884s
Jan 20 13:03:35.464: INFO: Pod "downwardapi-volume-41f74b18-3b85-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06645886s
Jan 20 13:03:37.501: INFO: Pod "downwardapi-volume-41f74b18-3b85-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103757538s
Jan 20 13:03:39.512: INFO: Pod "downwardapi-volume-41f74b18-3b85-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.114684408s
Jan 20 13:03:41.567: INFO: Pod "downwardapi-volume-41f74b18-3b85-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.169421583s
Jan 20 13:03:43.579: INFO: Pod "downwardapi-volume-41f74b18-3b85-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.18128405s
STEP: Saw pod success
Jan 20 13:03:43.579: INFO: Pod "downwardapi-volume-41f74b18-3b85-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 13:03:43.584: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod downwardapi-volume-41f74b18-3b85-11ea-8bde-0242ac110005 container client-container: 
STEP: delete the pod
Jan 20 13:03:44.981: INFO: Waiting for pod downwardapi-volume-41f74b18-3b85-11ea-8bde-0242ac110005 to disappear
Jan 20 13:03:45.537: INFO: Pod downwardapi-volume-41f74b18-3b85-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 13:03:45.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-l554t" for this suite.
Jan 20 13:03:51.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:03:52.338: INFO: namespace: e2e-tests-downward-api-l554t, resource: bindings, ignored listing per whitelist
Jan 20 13:03:52.351: INFO: namespace e2e-tests-downward-api-l554t deletion completed in 6.74678463s

• [SLOW TEST:21.470 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
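The Downward API test above polls the pod's Phase roughly every 2s ("Phase=\"Pending\" ... Elapsed: ...") until it reports Succeeded or the 5m timeout expires. A minimal sketch of that poll-until-phase loop; `wait_for_phase` and `get_phase` are illustrative stand-ins (the real framework reads `pod.Status.Phase` via the client-go API, not a shell helper):

```shell
#!/bin/sh
# Poll a pod-phase source until it matches the wanted phase, logging each
# poll the way the framework logs Phase/Elapsed lines.
wait_for_phase() {
  want=$1; max_polls=$2; interval=$3
  n=0
  while [ "$n" -lt "$max_polls" ]; do
    phase=$(get_phase)
    echo "poll $n: phase=$phase"
    [ "$phase" = "$want" ] && return 0
    sleep "$interval"
    n=$((n + 1))
  done
  return 1
}

# Simulated pod that stays Pending for two polls, then Succeeds.
COUNT_FILE=$(mktemp)
echo 0 > "$COUNT_FILE"
get_phase() {
  c=$(cat "$COUNT_FILE")
  echo $((c + 1)) > "$COUNT_FILE"
  if [ "$c" -ge 2 ]; then echo "Succeeded"; else echo "Pending"; fi
}
wait_for_phase Succeeded 5 0 && echo "pod satisfied condition"
rm -f "$COUNT_FILE"
```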
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 13:03:52.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-4edc262e-3b85-11ea-8bde-0242ac110005
STEP: Creating a pod to test consume configMaps
Jan 20 13:03:52.815: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4edd4690-3b85-11ea-8bde-0242ac110005" in namespace "e2e-tests-projected-6gl8d" to be "success or failure"
Jan 20 13:03:52.836: INFO: Pod "pod-projected-configmaps-4edd4690-3b85-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 20.666168ms
Jan 20 13:03:54.868: INFO: Pod "pod-projected-configmaps-4edd4690-3b85-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052329784s
Jan 20 13:03:56.897: INFO: Pod "pod-projected-configmaps-4edd4690-3b85-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081680363s
Jan 20 13:03:59.093: INFO: Pod "pod-projected-configmaps-4edd4690-3b85-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 6.277942545s
Jan 20 13:04:01.110: INFO: Pod "pod-projected-configmaps-4edd4690-3b85-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 8.294535372s
Jan 20 13:04:03.243: INFO: Pod "pod-projected-configmaps-4edd4690-3b85-11ea-8bde-0242ac110005": Phase="Pending", Reason="", readiness=false. Elapsed: 10.427548486s
Jan 20 13:04:05.924: INFO: Pod "pod-projected-configmaps-4edd4690-3b85-11ea-8bde-0242ac110005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.108566051s
STEP: Saw pod success
Jan 20 13:04:05.924: INFO: Pod "pod-projected-configmaps-4edd4690-3b85-11ea-8bde-0242ac110005" satisfied condition "success or failure"
Jan 20 13:04:05.941: INFO: Trying to get logs from node hunter-server-hu5at5svl7ps pod pod-projected-configmaps-4edd4690-3b85-11ea-8bde-0242ac110005 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 20 13:04:06.585: INFO: Waiting for pod pod-projected-configmaps-4edd4690-3b85-11ea-8bde-0242ac110005 to disappear
Jan 20 13:04:06.601: INFO: Pod pod-projected-configmaps-4edd4690-3b85-11ea-8bde-0242ac110005 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 13:04:06.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-6gl8d" for this suite.
Jan 20 13:04:14.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:04:14.705: INFO: namespace: e2e-tests-projected-6gl8d, resource: bindings, ignored listing per whitelist
Jan 20 13:04:14.858: INFO: namespace e2e-tests-projected-6gl8d deletion completed in 8.243089707s

• [SLOW TEST:22.506 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 13:04:14.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-w8xzw
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 20 13:04:15.276: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 20 13:04:57.636: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.32.0.5:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-w8xzw PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 13:04:57.636: INFO: >>> kubeConfig: /root/.kube/config
I0120 13:04:57.717979       8 log.go:172] (0xc00093f6b0) (0xc0009940a0) Create stream
I0120 13:04:57.718023       8 log.go:172] (0xc00093f6b0) (0xc0009940a0) Stream added, broadcasting: 1
I0120 13:04:57.723667       8 log.go:172] (0xc00093f6b0) Reply frame received for 1
I0120 13:04:57.723710       8 log.go:172] (0xc00093f6b0) (0xc000994140) Create stream
I0120 13:04:57.723732       8 log.go:172] (0xc00093f6b0) (0xc000994140) Stream added, broadcasting: 3
I0120 13:04:57.725039       8 log.go:172] (0xc00093f6b0) Reply frame received for 3
I0120 13:04:57.725076       8 log.go:172] (0xc00093f6b0) (0xc0021655e0) Create stream
I0120 13:04:57.725089       8 log.go:172] (0xc00093f6b0) (0xc0021655e0) Stream added, broadcasting: 5
I0120 13:04:57.727164       8 log.go:172] (0xc00093f6b0) Reply frame received for 5
I0120 13:04:58.231182       8 log.go:172] (0xc00093f6b0) Data frame received for 3
I0120 13:04:58.231245       8 log.go:172] (0xc000994140) (3) Data frame handling
I0120 13:04:58.231274       8 log.go:172] (0xc000994140) (3) Data frame sent
I0120 13:04:58.578660       8 log.go:172] (0xc00093f6b0) (0xc000994140) Stream removed, broadcasting: 3
I0120 13:04:58.578744       8 log.go:172] (0xc00093f6b0) Data frame received for 1
I0120 13:04:58.578772       8 log.go:172] (0xc0009940a0) (1) Data frame handling
I0120 13:04:58.578789       8 log.go:172] (0xc00093f6b0) (0xc0021655e0) Stream removed, broadcasting: 5
I0120 13:04:58.578821       8 log.go:172] (0xc0009940a0) (1) Data frame sent
I0120 13:04:58.578871       8 log.go:172] (0xc00093f6b0) (0xc0009940a0) Stream removed, broadcasting: 1
I0120 13:04:58.578882       8 log.go:172] (0xc00093f6b0) Go away received
I0120 13:04:58.579073       8 log.go:172] (0xc00093f6b0) (0xc0009940a0) Stream removed, broadcasting: 1
I0120 13:04:58.579272       8 log.go:172] (0xc00093f6b0) (0xc000994140) Stream removed, broadcasting: 3
I0120 13:04:58.579289       8 log.go:172] (0xc00093f6b0) (0xc0021655e0) Stream removed, broadcasting: 5
Jan 20 13:04:58.579: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 13:04:58.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-w8xzw" for this suite.
Jan 20 13:05:22.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:05:22.754: INFO: namespace: e2e-tests-pod-network-test-w8xzw, resource: bindings, ignored listing per whitelist
Jan 20 13:05:22.778: INFO: namespace e2e-tests-pod-network-test-w8xzw deletion completed in 24.172025392s

• [SLOW TEST:67.919 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
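The intra-pod UDP check above works by curling a `/dial` endpoint on one test pod, which relays a `hostName` request to the second pod over the requested protocol. A sketch of how that probe URL is assembled from the pieces visible in the ExecWithOptions line; `build_dial_url` is an illustrative helper, not a function from the e2e framework:

```shell
#!/bin/sh
# Assemble the /dial probe URL: dial_host:dial_port is the relaying test
# pod, target_host:target_port is the pod being probed over $proto.
build_dial_url() {
  dial_host=$1; dial_port=$2; target_host=$3; target_port=$4; proto=$5
  echo "http://${dial_host}:${dial_port}/dial?request=hostName&protocol=${proto}&host=${target_host}&port=${target_port}&tries=1"
}

# Reconstructs the exact URL curled in the log above.
build_dial_url 10.32.0.5 8080 10.32.0.4 8081 udp
```

The final `Waiting for endpoints: map[]` line indicates every expected endpoint answered, so the set of outstanding endpoints is empty and the spec passes.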
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jan 20 13:05:22.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 20 13:05:55.188: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-629xq PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 13:05:55.188: INFO: >>> kubeConfig: /root/.kube/config
I0120 13:05:55.255572       8 log.go:172] (0xc0012a4370) (0xc000f98dc0) Create stream
I0120 13:05:55.255679       8 log.go:172] (0xc0012a4370) (0xc000f98dc0) Stream added, broadcasting: 1
I0120 13:05:55.264424       8 log.go:172] (0xc0012a4370) Reply frame received for 1
I0120 13:05:55.264490       8 log.go:172] (0xc0012a4370) (0xc000fa3540) Create stream
I0120 13:05:55.264515       8 log.go:172] (0xc0012a4370) (0xc000fa3540) Stream added, broadcasting: 3
I0120 13:05:55.266672       8 log.go:172] (0xc0012a4370) Reply frame received for 3
I0120 13:05:55.266736       8 log.go:172] (0xc0012a4370) (0xc000fa35e0) Create stream
I0120 13:05:55.266751       8 log.go:172] (0xc0012a4370) (0xc000fa35e0) Stream added, broadcasting: 5
I0120 13:05:55.267991       8 log.go:172] (0xc0012a4370) Reply frame received for 5
I0120 13:05:55.434074       8 log.go:172] (0xc0012a4370) Data frame received for 3
I0120 13:05:55.434126       8 log.go:172] (0xc000fa3540) (3) Data frame handling
I0120 13:05:55.434150       8 log.go:172] (0xc000fa3540) (3) Data frame sent
I0120 13:05:55.703691       8 log.go:172] (0xc0012a4370) (0xc000fa3540) Stream removed, broadcasting: 3
I0120 13:05:55.703811       8 log.go:172] (0xc0012a4370) (0xc000fa35e0) Stream removed, broadcasting: 5
I0120 13:05:55.703885       8 log.go:172] (0xc0012a4370) Data frame received for 1
I0120 13:05:55.703942       8 log.go:172] (0xc000f98dc0) (1) Data frame handling
I0120 13:05:55.703979       8 log.go:172] (0xc000f98dc0) (1) Data frame sent
I0120 13:05:55.704000       8 log.go:172] (0xc0012a4370) (0xc000f98dc0) Stream removed, broadcasting: 1
I0120 13:05:55.704352       8 log.go:172] (0xc0012a4370) (0xc000f98dc0) Stream removed, broadcasting: 1
I0120 13:05:55.704371       8 log.go:172] (0xc0012a4370) (0xc000fa3540) Stream removed, broadcasting: 3
I0120 13:05:55.704546       8 log.go:172] (0xc0012a4370) (0xc000fa35e0) Stream removed, broadcasting: 5
Jan 20 13:05:55.704: INFO: Exec stderr: ""
I0120 13:05:55.704653       8 log.go:172] (0xc0012a4370) Go away received
Jan 20 13:05:55.704: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-629xq PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 13:05:55.704: INFO: >>> kubeConfig: /root/.kube/config
I0120 13:05:55.808073       8 log.go:172] (0xc0013f8420) (0xc001e32500) Create stream
I0120 13:05:55.808169       8 log.go:172] (0xc0013f8420) (0xc001e32500) Stream added, broadcasting: 1
I0120 13:05:55.814518       8 log.go:172] (0xc0013f8420) Reply frame received for 1
I0120 13:05:55.814583       8 log.go:172] (0xc0013f8420) (0xc000f98e60) Create stream
I0120 13:05:55.814594       8 log.go:172] (0xc0013f8420) (0xc000f98e60) Stream added, broadcasting: 3
I0120 13:05:55.816305       8 log.go:172] (0xc0013f8420) Reply frame received for 3
I0120 13:05:55.816340       8 log.go:172] (0xc0013f8420) (0xc001984a00) Create stream
I0120 13:05:55.816356       8 log.go:172] (0xc0013f8420) (0xc001984a00) Stream added, broadcasting: 5
I0120 13:05:55.818150       8 log.go:172] (0xc0013f8420) Reply frame received for 5
I0120 13:05:55.980173       8 log.go:172] (0xc0013f8420) Data frame received for 3
I0120 13:05:55.980232       8 log.go:172] (0xc000f98e60) (3) Data frame handling
I0120 13:05:55.980257       8 log.go:172] (0xc000f98e60) (3) Data frame sent
I0120 13:05:56.124777       8 log.go:172] (0xc0013f8420) Data frame received for 1
I0120 13:05:56.124900       8 log.go:172] (0xc001e32500) (1) Data frame handling
I0120 13:05:56.124933       8 log.go:172] (0xc001e32500) (1) Data frame sent
I0120 13:05:56.124959       8 log.go:172] (0xc0013f8420) (0xc001e32500) Stream removed, broadcasting: 1
I0120 13:05:56.126167       8 log.go:172] (0xc0013f8420) (0xc001984a00) Stream removed, broadcasting: 5
I0120 13:05:56.126266       8 log.go:172] (0xc0013f8420) (0xc000f98e60) Stream removed, broadcasting: 3
I0120 13:05:56.126344       8 log.go:172] (0xc0013f8420) (0xc001e32500) Stream removed, broadcasting: 1
I0120 13:05:56.127353       8 log.go:172] (0xc0013f8420) (0xc000f98e60) Stream removed, broadcasting: 3
I0120 13:05:56.127360       8 log.go:172] (0xc0013f8420) (0xc001984a00) Stream removed, broadcasting: 5
Jan 20 13:05:56.127: INFO: Exec stderr: ""
Jan 20 13:05:56.127: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-629xq PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 13:05:56.128: INFO: >>> kubeConfig: /root/.kube/config
I0120 13:05:56.230003       8 log.go:172] (0xc00093f290) (0xc000fa3900) Create stream
I0120 13:05:56.230059       8 log.go:172] (0xc00093f290) (0xc000fa3900) Stream added, broadcasting: 1
I0120 13:05:56.234103       8 log.go:172] (0xc00093f290) Reply frame received for 1
I0120 13:05:56.234147       8 log.go:172] (0xc00093f290) (0xc000e6dae0) Create stream
I0120 13:05:56.234157       8 log.go:172] (0xc00093f290) (0xc000e6dae0) Stream added, broadcasting: 3
I0120 13:05:56.235476       8 log.go:172] (0xc00093f290) Reply frame received for 3
I0120 13:05:56.235502       8 log.go:172] (0xc00093f290) (0xc000e6db80) Create stream
I0120 13:05:56.235510       8 log.go:172] (0xc00093f290) (0xc000e6db80) Stream added, broadcasting: 5
I0120 13:05:56.237502       8 log.go:172] (0xc00093f290) Reply frame received for 5
I0120 13:05:56.346867       8 log.go:172] (0xc00093f290) Data frame received for 3
I0120 13:05:56.346956       8 log.go:172] (0xc000e6dae0) (3) Data frame handling
I0120 13:05:56.347001       8 log.go:172] (0xc000e6dae0) (3) Data frame sent
I0120 13:05:56.359812       8 log.go:172] (0xc00093f290) (0xc000e6db80) Stream removed, broadcasting: 5
I0120 13:05:56.360014       8 log.go:172] (0xc00093f290) (0xc000fa3900) Stream removed, broadcasting: 1
I0120 13:05:56.360098       8 log.go:172] (0xc00093f290) (0xc000e6dae0) Stream removed, broadcasting: 3
I0120 13:05:56.360125       8 log.go:172] (0xc00093f290) Go away received
I0120 13:05:56.360335       8 log.go:172] (0xc00093f290) (0xc000fa3900) Stream removed, broadcasting: 1
I0120 13:05:56.360379       8 log.go:172] (0xc00093f290) (0xc000e6dae0) Stream removed, broadcasting: 3
I0120 13:05:56.360397       8 log.go:172] (0xc00093f290) (0xc000e6db80) Stream removed, broadcasting: 5
Jan 20 13:05:56.360: INFO: Exec stderr: ""
Jan 20 13:05:56.360: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-629xq PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 13:05:56.360: INFO: >>> kubeConfig: /root/.kube/config
I0120 13:05:56.531806       8 log.go:172] (0xc0012a4370) (0xc0019461e0) Create stream
I0120 13:05:56.532050       8 log.go:172] (0xc0012a4370) (0xc0019461e0) Stream added, broadcasting: 1
I0120 13:05:56.545364       8 log.go:172] (0xc0012a4370) Reply frame received for 1
I0120 13:05:56.545429       8 log.go:172] (0xc0012a4370) (0xc0019640a0) Create stream
I0120 13:05:56.545445       8 log.go:172] (0xc0012a4370) (0xc0019640a0) Stream added, broadcasting: 3
I0120 13:05:56.547656       8 log.go:172] (0xc0012a4370) Reply frame received for 3
I0120 13:05:56.547689       8 log.go:172] (0xc0012a4370) (0xc001964140) Create stream
I0120 13:05:56.547714       8 log.go:172] (0xc0012a4370) (0xc001964140) Stream added, broadcasting: 5
I0120 13:05:56.550371       8 log.go:172] (0xc0012a4370) Reply frame received for 5
I0120 13:05:56.759444       8 log.go:172] (0xc0012a4370) Data frame received for 3
I0120 13:05:56.759534       8 log.go:172] (0xc0019640a0) (3) Data frame handling
I0120 13:05:56.759559       8 log.go:172] (0xc0019640a0) (3) Data frame sent
I0120 13:05:57.001590       8 log.go:172] (0xc0012a4370) Data frame received for 1
I0120 13:05:57.001663       8 log.go:172] (0xc0012a4370) (0xc0019640a0) Stream removed, broadcasting: 3
I0120 13:05:57.001766       8 log.go:172] (0xc0012a4370) (0xc001964140) Stream removed, broadcasting: 5
I0120 13:05:57.001810       8 log.go:172] (0xc0019461e0) (1) Data frame handling
I0120 13:05:57.001832       8 log.go:172] (0xc0019461e0) (1) Data frame sent
I0120 13:05:57.001839       8 log.go:172] (0xc0012a4370) (0xc0019461e0) Stream removed, broadcasting: 1
I0120 13:05:57.001871       8 log.go:172] (0xc0012a4370) Go away received
I0120 13:05:57.002256       8 log.go:172] (0xc0012a4370) (0xc0019461e0) Stream removed, broadcasting: 1
I0120 13:05:57.002275       8 log.go:172] (0xc0012a4370) (0xc0019640a0) Stream removed, broadcasting: 3
I0120 13:05:57.002288       8 log.go:172] (0xc0012a4370) (0xc001964140) Stream removed, broadcasting: 5
Jan 20 13:05:57.002: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 20 13:05:57.002: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-629xq PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 13:05:57.002: INFO: >>> kubeConfig: /root/.kube/config
I0120 13:05:57.161623       8 log.go:172] (0xc00093f080) (0xc001640280) Create stream
I0120 13:05:57.161829       8 log.go:172] (0xc00093f080) (0xc001640280) Stream added, broadcasting: 1
I0120 13:05:57.168096       8 log.go:172] (0xc00093f080) Reply frame received for 1
I0120 13:05:57.168172       8 log.go:172] (0xc00093f080) (0xc001964280) Create stream
I0120 13:05:57.168181       8 log.go:172] (0xc00093f080) (0xc001964280) Stream added, broadcasting: 3
I0120 13:05:57.169561       8 log.go:172] (0xc00093f080) Reply frame received for 3
I0120 13:05:57.169595       8 log.go:172] (0xc00093f080) (0xc001946280) Create stream
I0120 13:05:57.169613       8 log.go:172] (0xc00093f080) (0xc001946280) Stream added, broadcasting: 5
I0120 13:05:57.170709       8 log.go:172] (0xc00093f080) Reply frame received for 5
I0120 13:05:57.303379       8 log.go:172] (0xc00093f080) Data frame received for 3
I0120 13:05:57.303523       8 log.go:172] (0xc001964280) (3) Data frame handling
I0120 13:05:57.303561       8 log.go:172] (0xc001964280) (3) Data frame sent
I0120 13:05:57.435815       8 log.go:172] (0xc00093f080) Data frame received for 1
I0120 13:05:57.436015       8 log.go:172] (0xc00093f080) (0xc001964280) Stream removed, broadcasting: 3
I0120 13:05:57.436258       8 log.go:172] (0xc001640280) (1) Data frame handling
I0120 13:05:57.436341       8 log.go:172] (0xc001640280) (1) Data frame sent
I0120 13:05:57.436401       8 log.go:172] (0xc00093f080) (0xc001946280) Stream removed, broadcasting: 5
I0120 13:05:57.436445       8 log.go:172] (0xc00093f080) (0xc001640280) Stream removed, broadcasting: 1
I0120 13:05:57.436468       8 log.go:172] (0xc00093f080) Go away received
I0120 13:05:57.436662       8 log.go:172] (0xc00093f080) (0xc001640280) Stream removed, broadcasting: 1
I0120 13:05:57.436671       8 log.go:172] (0xc00093f080) (0xc001964280) Stream removed, broadcasting: 3
I0120 13:05:57.436678       8 log.go:172] (0xc00093f080) (0xc001946280) Stream removed, broadcasting: 5
Jan 20 13:05:57.436: INFO: Exec stderr: ""
Jan 20 13:05:57.436: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-629xq PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 13:05:57.436: INFO: >>> kubeConfig: /root/.kube/config
I0120 13:05:57.609951       8 log.go:172] (0xc0013f8420) (0xc0005ca500) Create stream
I0120 13:05:57.610021       8 log.go:172] (0xc0013f8420) (0xc0005ca500) Stream added, broadcasting: 1
I0120 13:05:57.615007       8 log.go:172] (0xc0013f8420) Reply frame received for 1
I0120 13:05:57.615109       8 log.go:172] (0xc0013f8420) (0xc000e7a000) Create stream
I0120 13:05:57.615121       8 log.go:172] (0xc0013f8420) (0xc000e7a000) Stream added, broadcasting: 3
I0120 13:05:57.616439       8 log.go:172] (0xc0013f8420) Reply frame received for 3
I0120 13:05:57.616461       8 log.go:172] (0xc0013f8420) (0xc000e7a320) Create stream
I0120 13:05:57.616469       8 log.go:172] (0xc0013f8420) (0xc000e7a320) Stream added, broadcasting: 5
I0120 13:05:57.617316       8 log.go:172] (0xc0013f8420) Reply frame received for 5
I0120 13:05:57.744554       8 log.go:172] (0xc0013f8420) Data frame received for 3
I0120 13:05:57.744609       8 log.go:172] (0xc000e7a000) (3) Data frame handling
I0120 13:05:57.744625       8 log.go:172] (0xc000e7a000) (3) Data frame sent
I0120 13:05:57.880057       8 log.go:172] (0xc0013f8420) (0xc000e7a000) Stream removed, broadcasting: 3
I0120 13:05:57.880420       8 log.go:172] (0xc0013f8420) Data frame received for 1
I0120 13:05:57.880502       8 log.go:172] (0xc0005ca500) (1) Data frame handling
I0120 13:05:57.880523       8 log.go:172] (0xc0013f8420) (0xc000e7a320) Stream removed, broadcasting: 5
I0120 13:05:57.880542       8 log.go:172] (0xc0005ca500) (1) Data frame sent
I0120 13:05:57.880569       8 log.go:172] (0xc0013f8420) (0xc0005ca500) Stream removed, broadcasting: 1
I0120 13:05:57.880657       8 log.go:172] (0xc0013f8420) Go away received
I0120 13:05:57.880878       8 log.go:172] (0xc0013f8420) (0xc0005ca500) Stream removed, broadcasting: 1
I0120 13:05:57.880887       8 log.go:172] (0xc0013f8420) (0xc000e7a000) Stream removed, broadcasting: 3
I0120 13:05:57.880898       8 log.go:172] (0xc0013f8420) (0xc000e7a320) Stream removed, broadcasting: 5
Jan 20 13:05:57.880: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 20 13:05:57.881: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-629xq PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 13:05:57.881: INFO: >>> kubeConfig: /root/.kube/config
I0120 13:05:57.998283       8 log.go:172] (0xc0025a42c0) (0xc001964820) Create stream
I0120 13:05:57.998354       8 log.go:172] (0xc0025a42c0) (0xc001964820) Stream added, broadcasting: 1
I0120 13:05:58.002966       8 log.go:172] (0xc0025a42c0) Reply frame received for 1
I0120 13:05:58.002997       8 log.go:172] (0xc0025a42c0) (0xc0003be0a0) Create stream
I0120 13:05:58.003008       8 log.go:172] (0xc0025a42c0) (0xc0003be0a0) Stream added, broadcasting: 3
I0120 13:05:58.003653       8 log.go:172] (0xc0025a42c0) Reply frame received for 3
I0120 13:05:58.003667       8 log.go:172] (0xc0025a42c0) (0xc001640320) Create stream
I0120 13:05:58.003674       8 log.go:172] (0xc0025a42c0) (0xc001640320) Stream added, broadcasting: 5
I0120 13:05:58.004332       8 log.go:172] (0xc0025a42c0) Reply frame received for 5
I0120 13:05:58.105161       8 log.go:172] (0xc0025a42c0) Data frame received for 3
I0120 13:05:58.105182       8 log.go:172] (0xc0003be0a0) (3) Data frame handling
I0120 13:05:58.105199       8 log.go:172] (0xc0003be0a0) (3) Data frame sent
I0120 13:05:58.201696       8 log.go:172] (0xc0025a42c0) Data frame received for 1
I0120 13:05:58.201745       8 log.go:172] (0xc0025a42c0) (0xc0003be0a0) Stream removed, broadcasting: 3
I0120 13:05:58.201820       8 log.go:172] (0xc001964820) (1) Data frame handling
I0120 13:05:58.201847       8 log.go:172] (0xc001964820) (1) Data frame sent
I0120 13:05:58.201873       8 log.go:172] (0xc0025a42c0) (0xc001640320) Stream removed, broadcasting: 5
I0120 13:05:58.201917       8 log.go:172] (0xc0025a42c0) (0xc001964820) Stream removed, broadcasting: 1
I0120 13:05:58.201952       8 log.go:172] (0xc0025a42c0) Go away received
I0120 13:05:58.202102       8 log.go:172] (0xc0025a42c0) (0xc001964820) Stream removed, broadcasting: 1
I0120 13:05:58.202129       8 log.go:172] (0xc0025a42c0) (0xc0003be0a0) Stream removed, broadcasting: 3
I0120 13:05:58.202150       8 log.go:172] (0xc0025a42c0) (0xc001640320) Stream removed, broadcasting: 5
Jan 20 13:05:58.202: INFO: Exec stderr: ""
Jan 20 13:05:58.202: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-629xq PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 13:05:58.202: INFO: >>> kubeConfig: /root/.kube/config
I0120 13:05:58.263243       8 log.go:172] (0xc0025a4790) (0xc001964aa0) Create stream
I0120 13:05:58.263303       8 log.go:172] (0xc0025a4790) (0xc001964aa0) Stream added, broadcasting: 1
I0120 13:05:58.267361       8 log.go:172] (0xc0025a4790) Reply frame received for 1
I0120 13:05:58.267406       8 log.go:172] (0xc0025a4790) (0xc0016406e0) Create stream
I0120 13:05:58.267424       8 log.go:172] (0xc0025a4790) (0xc0016406e0) Stream added, broadcasting: 3
I0120 13:05:58.268947       8 log.go:172] (0xc0025a4790) Reply frame received for 3
I0120 13:05:58.268987       8 log.go:172] (0xc0025a4790) (0xc0003be820) Create stream
I0120 13:05:58.269003       8 log.go:172] (0xc0025a4790) (0xc0003be820) Stream added, broadcasting: 5
I0120 13:05:58.270286       8 log.go:172] (0xc0025a4790) Reply frame received for 5
I0120 13:05:58.411950       8 log.go:172] (0xc0025a4790) Data frame received for 3
I0120 13:05:58.412003       8 log.go:172] (0xc0016406e0) (3) Data frame handling
I0120 13:05:58.412040       8 log.go:172] (0xc0016406e0) (3) Data frame sent
I0120 13:05:58.603648       8 log.go:172] (0xc0025a4790) Data frame received for 1
I0120 13:05:58.603777       8 log.go:172] (0xc0025a4790) (0xc0003be820) Stream removed, broadcasting: 5
I0120 13:05:58.603861       8 log.go:172] (0xc001964aa0) (1) Data frame handling
I0120 13:05:58.603969       8 log.go:172] (0xc001964aa0) (1) Data frame sent
I0120 13:05:58.604115       8 log.go:172] (0xc0025a4790) (0xc0016406e0) Stream removed, broadcasting: 3
I0120 13:05:58.604166       8 log.go:172] (0xc0025a4790) (0xc001964aa0) Stream removed, broadcasting: 1
I0120 13:05:58.604190       8 log.go:172] (0xc0025a4790) Go away received
I0120 13:05:58.604544       8 log.go:172] (0xc0025a4790) (0xc001964aa0) Stream removed, broadcasting: 1
I0120 13:05:58.604574       8 log.go:172] (0xc0025a4790) (0xc0016406e0) Stream removed, broadcasting: 3
I0120 13:05:58.604594       8 log.go:172] (0xc0025a4790) (0xc0003be820) Stream removed, broadcasting: 5
Jan 20 13:05:58.604: INFO: Exec stderr: ""
Jan 20 13:05:58.604: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-629xq PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 13:05:58.604: INFO: >>> kubeConfig: /root/.kube/config
I0120 13:05:58.718804       8 log.go:172] (0xc00093fa20) (0xc001640a00) Create stream
I0120 13:05:58.718944       8 log.go:172] (0xc00093fa20) (0xc001640a00) Stream added, broadcasting: 1
I0120 13:05:58.730921       8 log.go:172] (0xc00093fa20) Reply frame received for 1
I0120 13:05:58.730979       8 log.go:172] (0xc00093fa20) (0xc001964b40) Create stream
I0120 13:05:58.730992       8 log.go:172] (0xc00093fa20) (0xc001964b40) Stream added, broadcasting: 3
I0120 13:05:58.735500       8 log.go:172] (0xc00093fa20) Reply frame received for 3
I0120 13:05:58.735554       8 log.go:172] (0xc00093fa20) (0xc001946320) Create stream
I0120 13:05:58.735583       8 log.go:172] (0xc00093fa20) (0xc001946320) Stream added, broadcasting: 5
I0120 13:05:58.740808       8 log.go:172] (0xc00093fa20) Reply frame received for 5
I0120 13:05:58.862197       8 log.go:172] (0xc00093fa20) Data frame received for 3
I0120 13:05:58.862378       8 log.go:172] (0xc001964b40) (3) Data frame handling
I0120 13:05:58.862462       8 log.go:172] (0xc001964b40) (3) Data frame sent
I0120 13:05:58.985357       8 log.go:172] (0xc00093fa20) Data frame received for 1
I0120 13:05:58.985448       8 log.go:172] (0xc00093fa20) (0xc001964b40) Stream removed, broadcasting: 3
I0120 13:05:58.985556       8 log.go:172] (0xc001640a00) (1) Data frame handling
I0120 13:05:58.985581       8 log.go:172] (0xc001640a00) (1) Data frame sent
I0120 13:05:58.985608       8 log.go:172] (0xc00093fa20) (0xc001640a00) Stream removed, broadcasting: 1
I0120 13:05:58.985647       8 log.go:172] (0xc00093fa20) (0xc001946320) Stream removed, broadcasting: 5
I0120 13:05:58.985667       8 log.go:172] (0xc00093fa20) Go away received
I0120 13:05:58.985843       8 log.go:172] (0xc00093fa20) (0xc001640a00) Stream removed, broadcasting: 1
I0120 13:05:58.985860       8 log.go:172] (0xc00093fa20) (0xc001964b40) Stream removed, broadcasting: 3
I0120 13:05:58.985870       8 log.go:172] (0xc00093fa20) (0xc001946320) Stream removed, broadcasting: 5
Jan 20 13:05:58.985: INFO: Exec stderr: ""
Jan 20 13:05:58.985: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-629xq PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 20 13:05:58.986: INFO: >>> kubeConfig: /root/.kube/config
I0120 13:05:59.046041       8 log.go:172] (0xc0012a4840) (0xc0019468c0) Create stream
I0120 13:05:59.046127       8 log.go:172] (0xc0012a4840) (0xc0019468c0) Stream added, broadcasting: 1
I0120 13:05:59.053958       8 log.go:172] (0xc0012a4840) Reply frame received for 1
I0120 13:05:59.054028       8 log.go:172] (0xc0012a4840) (0xc000e7a3c0) Create stream
I0120 13:05:59.054049       8 log.go:172] (0xc0012a4840) (0xc000e7a3c0) Stream added, broadcasting: 3
I0120 13:05:59.055137       8 log.go:172] (0xc0012a4840) Reply frame received for 3
I0120 13:05:59.055166       8 log.go:172] (0xc0012a4840) (0xc001946a00) Create stream
I0120 13:05:59.055193       8 log.go:172] (0xc0012a4840) (0xc001946a00) Stream added, broadcasting: 5
I0120 13:05:59.056306       8 log.go:172] (0xc0012a4840) Reply frame received for 5
I0120 13:05:59.155618       8 log.go:172] (0xc0012a4840) Data frame received for 3
I0120 13:05:59.155687       8 log.go:172] (0xc000e7a3c0) (3) Data frame handling
I0120 13:05:59.155721       8 log.go:172] (0xc000e7a3c0) (3) Data frame sent
I0120 13:05:59.260888       8 log.go:172] (0xc0012a4840) Data frame received for 1
I0120 13:05:59.260926       8 log.go:172] (0xc0019468c0) (1) Data frame handling
I0120 13:05:59.260950       8 log.go:172] (0xc0019468c0) (1) Data frame sent
I0120 13:05:59.260984       8 log.go:172] (0xc0012a4840) (0xc0019468c0) Stream removed, broadcasting: 1
I0120 13:05:59.261034       8 log.go:172] (0xc0012a4840) (0xc000e7a3c0) Stream removed, broadcasting: 3
I0120 13:05:59.261502       8 log.go:172] (0xc0012a4840) (0xc001946a00) Stream removed, broadcasting: 5
I0120 13:05:59.261549       8 log.go:172] (0xc0012a4840) (0xc0019468c0) Stream removed, broadcasting: 1
I0120 13:05:59.261563       8 log.go:172] (0xc0012a4840) (0xc000e7a3c0) Stream removed, broadcasting: 3
I0120 13:05:59.261573       8 log.go:172] (0xc0012a4840) (0xc001946a00) Stream removed, broadcasting: 5
Jan 20 13:05:59.261: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jan 20 13:05:59.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-629xq" for this suite.
Jan 20 13:06:55.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 20 13:06:55.391: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-629xq, resource: bindings, ignored listing per whitelist
Jan 20 13:06:55.499: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-629xq deletion completed in 56.223732199s

• [SLOW TEST:92.720 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSJan 20 13:06:55.499: INFO: Running AfterSuite actions on all nodes
Jan 20 13:06:55.499: INFO: Running AfterSuite actions on node 1
Jan 20 13:06:55.499: INFO: Skipping dumping logs from cluster

Ran 199 of 2164 Specs in 8380.693 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
PASS
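For anyone post-processing run logs like this one, a small sketch of extracting the Ginkgo suite summary may be useful. This is a hypothetical helper, not part of the e2e suite; `parse_ginkgo_summary` and the field names are our own, and only the two summary-line formats shown in this log are assumed.

```python
import re

def parse_ginkgo_summary(text):
    """Pull spec counts and wall time out of Ginkgo suite-summary lines such as:
         Ran 199 of 2164 Specs in 8380.693 seconds
         SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped
    Returns a dict, or None if either line is missing."""
    ran = re.search(r"Ran (\d+) of (\d+) Specs in ([\d.]+) seconds", text)
    tally = re.search(r"(\d+) Passed \| (\d+) Failed \| (\d+) Pending \| (\d+) Skipped", text)
    if not ran or not tally:
        return None
    return {
        "ran": int(ran.group(1)),
        "total": int(ran.group(2)),
        "seconds": float(ran.group(3)),
        "passed": int(tally.group(1)),
        "failed": int(tally.group(2)),
        "pending": int(tally.group(3)),
        "skipped": int(tally.group(4)),
    }

log_tail = """Ran 199 of 2164 Specs in 8380.693 seconds
SUCCESS! -- 199 Passed | 0 Failed | 0 Pending | 1965 Skipped"""
summary = parse_ginkgo_summary(log_tail)

# Sanity checks on the run above: passed + failed should equal the specs
# actually run, and ran + skipped should cover the whole suite (199 + 1965 = 2164).
assert summary["passed"] + summary["failed"] == summary["ran"]
assert summary["ran"] + summary["skipped"] == summary["total"]
```

The same consistency checks make a handy smoke test in CI log scrapers: a mismatch usually means the log was truncated mid-suite.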