I0722 10:47:05.780538 7 e2e.go:224] Starting e2e run "aed10f48-cc08-11ea-aa05-0242ac11000b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1595414825 - Will randomize all specs
Will run 201 of 2164 specs
Jul 22 10:47:05.962: INFO: >>> kubeConfig: /root/.kube/config
Jul 22 10:47:05.964: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 22 10:47:05.982: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 22 10:47:06.006: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 22 10:47:06.006: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 22 10:47:06.006: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 22 10:47:06.016: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 22 10:47:06.017: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 22 10:47:06.017: INFO: e2e test version: v1.13.12
Jul 22 10:47:06.018: INFO: kube-apiserver version: v1.13.12
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:47:06.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
Jul 22 10:47:06.121: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 10:47:06.123: INFO: Creating ReplicaSet my-hostname-basic-af4f3937-cc08-11ea-aa05-0242ac11000b
Jul 22 10:47:06.131: INFO: Pod name my-hostname-basic-af4f3937-cc08-11ea-aa05-0242ac11000b: Found 0 pods out of 1
Jul 22 10:47:11.135: INFO: Pod name my-hostname-basic-af4f3937-cc08-11ea-aa05-0242ac11000b: Found 1 pods out of 1
Jul 22 10:47:11.135: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-af4f3937-cc08-11ea-aa05-0242ac11000b" is running
Jul 22 10:47:11.138: INFO: Pod "my-hostname-basic-af4f3937-cc08-11ea-aa05-0242ac11000b-msqrn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-22 10:47:06 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-22 10:47:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-22 10:47:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-22 10:47:06 +0000 UTC Reason: Message:}])
Jul 22 10:47:11.138: INFO: Trying to dial the pod
Jul 22 10:47:16.146: INFO: Controller my-hostname-basic-af4f3937-cc08-11ea-aa05-0242ac11000b: Got expected result from replica 1 [my-hostname-basic-af4f3937-cc08-11ea-aa05-0242ac11000b-msqrn]: "my-hostname-basic-af4f3937-cc08-11ea-aa05-0242ac11000b-msqrn", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:47:16.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-9djsk" for this suite.
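The spec above creates a single-replica ReplicaSet and then dials the replica to confirm it serves its own hostname. A minimal sketch of a ReplicaSet of that kind is shown below; the label, image, and port values are illustrative assumptions and are not taken from this log.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-example      # the test generates a unique name like the ones logged above
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
      - name: my-hostname-basic-example
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed public image; the log does not record it
        ports:
        - containerPort: 9376                                         # assumed port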
Jul 22 10:47:22.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:47:22.589: INFO: namespace: e2e-tests-replicaset-9djsk, resource: bindings, ignored listing per whitelist
Jul 22 10:47:22.589: INFO: namespace e2e-tests-replicaset-9djsk deletion completed in 6.440394091s
• [SLOW TEST:16.571 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:47:22.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-b92a134e-cc08-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 10:47:22.740: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b935ca1b-cc08-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-gqkdp" to be "success or failure"
Jul 22 10:47:22.744: INFO: Pod "pod-projected-secrets-b935ca1b-cc08-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.343795ms
Jul 22 10:47:24.847: INFO: Pod "pod-projected-secrets-b935ca1b-cc08-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106184905s
Jul 22 10:47:26.851: INFO: Pod "pod-projected-secrets-b935ca1b-cc08-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.110023258s
Jul 22 10:47:28.855: INFO: Pod "pod-projected-secrets-b935ca1b-cc08-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.114158665s
STEP: Saw pod success
Jul 22 10:47:28.855: INFO: Pod "pod-projected-secrets-b935ca1b-cc08-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 10:47:28.857: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-b935ca1b-cc08-11ea-aa05-0242ac11000b container projected-secret-volume-test:
STEP: delete the pod
Jul 22 10:47:29.000: INFO: Waiting for pod pod-projected-secrets-b935ca1b-cc08-11ea-aa05-0242ac11000b to disappear
Jul 22 10:47:29.061: INFO: Pod pod-projected-secrets-b935ca1b-cc08-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:47:29.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gqkdp" for this suite.
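The pod built in this spec consumes a Secret through a projected volume while running as a non-root user, so that the defaultMode and fsGroup settings can be checked on the mounted files. A minimal sketch of that pod shape follows; the UID/GID, mode, image, and command are illustrative assumptions, and only the secret name comes from the log above.

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example          # illustrative name
spec:
  securityContext:
    runAsUser: 1000                            # non-root user (assumed value)
    fsGroup: 1001                              # fsGroup under test (assumed value)
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29      # assumed image
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/*"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400                        # defaultMode under test (assumed value)
      sources:
      - secret:
          name: projected-secret-test-b92a134e-cc08-11ea-aa05-0242ac11000b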
Jul 22 10:47:37.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:47:37.661: INFO: namespace: e2e-tests-projected-gqkdp, resource: bindings, ignored listing per whitelist
Jul 22 10:47:37.668: INFO: namespace e2e-tests-projected-gqkdp deletion completed in 8.603235622s
• [SLOW TEST:15.079 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:47:37.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-k2qnh
Jul 22 10:47:42.989: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-k2qnh
STEP: checking the pod's current state and verifying that restartCount is present
Jul 22 10:47:42.993: INFO: Initial restart count of pod liveness-exec is 0
Jul 22 10:48:37.434: INFO: Restart count of pod e2e-tests-container-probe-k2qnh/liveness-exec is now 1 (54.441608704s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:48:37.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-k2qnh" for this suite.
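The liveness-exec pod above pairs a container that removes /tmp/health partway through its run with an exec liveness probe that runs "cat /tmp/health", so the kubelet restarts the container once the file disappears (the restart count going from 0 to 1 is what the spec asserts). A minimal sketch follows; the image, command, and probe timings are illustrative assumptions, not values read from this log.

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29                              # assumed image
    args:
    - /bin/sh
    - -c
    - touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600       # assumed container lifecycle
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15                                          # assumed timings
      periodSeconds: 5
      failureThreshold: 1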
Jul 22 10:48:43.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:48:43.545: INFO: namespace: e2e-tests-container-probe-k2qnh, resource: bindings, ignored listing per whitelist
Jul 22 10:48:43.600: INFO: namespace e2e-tests-container-probe-k2qnh deletion completed in 6.084021149s
• [SLOW TEST:65.931 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:48:43.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 22 10:48:43.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-z9bdx'
Jul 22 10:48:52.529: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 22 10:48:52.529: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jul 22 10:48:54.567: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-x8lqb]
Jul 22 10:48:54.567: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-x8lqb" in namespace "e2e-tests-kubectl-z9bdx" to be "running and ready"
Jul 22 10:48:54.571: INFO: Pod "e2e-test-nginx-rc-x8lqb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.584304ms
Jul 22 10:48:56.575: INFO: Pod "e2e-test-nginx-rc-x8lqb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007182586s
Jul 22 10:48:59.417: INFO: Pod "e2e-test-nginx-rc-x8lqb": Phase="Running", Reason="", readiness=true. Elapsed: 4.850140264s
Jul 22 10:48:59.418: INFO: Pod "e2e-test-nginx-rc-x8lqb" satisfied condition "running and ready"
Jul 22 10:48:59.418: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-x8lqb]
Jul 22 10:48:59.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-z9bdx'
Jul 22 10:48:59.636: INFO: stderr: ""
Jul 22 10:48:59.636: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
Jul 22 10:48:59.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-z9bdx'
Jul 22 10:48:59.764: INFO: stderr: ""
Jul 22 10:48:59.764: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:48:59.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-z9bdx" for this suite.
Jul 22 10:49:22.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:49:22.220: INFO: namespace: e2e-tests-kubectl-z9bdx, resource: bindings, ignored listing per whitelist
Jul 22 10:49:22.337: INFO: namespace e2e-tests-kubectl-z9bdx deletion completed in 22.258650667s
• [SLOW TEST:38.737 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:49:22.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 10:49:22.577: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul 22 10:49:22.752: INFO: Number of nodes with available pods: 0
Jul 22 10:49:22.752: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
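The "complex daemon" spec drives a DaemonSet whose pods are confined to labelled nodes by a node selector and whose update strategy is later switched to RollingUpdate. A minimal sketch of such a DaemonSet is below; the label key and values and the image are illustrative assumptions, and only the DaemonSet name comes from the log.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate                              # the strategy the spec switches to partway through
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue                                  # assumed label key; the log only names the values blue/green
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine   # assumed image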
Jul 22 10:49:22.817: INFO: Number of nodes with available pods: 0
Jul 22 10:49:22.817: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:23.821: INFO: Number of nodes with available pods: 0
Jul 22 10:49:23.821: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:24.886: INFO: Number of nodes with available pods: 0
Jul 22 10:49:24.886: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:25.820: INFO: Number of nodes with available pods: 0
Jul 22 10:49:25.820: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:26.821: INFO: Number of nodes with available pods: 1
Jul 22 10:49:26.821: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul 22 10:49:26.849: INFO: Number of nodes with available pods: 1
Jul 22 10:49:26.850: INFO: Number of running nodes: 0, number of available pods: 1
Jul 22 10:49:27.853: INFO: Number of nodes with available pods: 0
Jul 22 10:49:27.854: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul 22 10:49:28.064: INFO: Number of nodes with available pods: 0
Jul 22 10:49:28.064: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:29.184: INFO: Number of nodes with available pods: 0
Jul 22 10:49:29.184: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:30.178: INFO: Number of nodes with available pods: 0
Jul 22 10:49:30.178: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:31.412: INFO: Number of nodes with available pods: 0
Jul 22 10:49:31.412: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:32.089: INFO: Number of nodes with available pods: 0
Jul 22 10:49:32.089: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:33.088: INFO: Number of nodes with available pods: 0
Jul 22 10:49:33.088: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:34.069: INFO: Number of nodes with available pods: 0
Jul 22 10:49:34.069: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:35.076: INFO: Number of nodes with available pods: 0
Jul 22 10:49:35.076: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:36.068: INFO: Number of nodes with available pods: 0
Jul 22 10:49:36.068: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:37.068: INFO: Number of nodes with available pods: 0
Jul 22 10:49:37.068: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:38.505: INFO: Number of nodes with available pods: 0
Jul 22 10:49:38.505: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:39.068: INFO: Number of nodes with available pods: 0
Jul 22 10:49:39.068: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:40.069: INFO: Number of nodes with available pods: 0
Jul 22 10:49:40.069: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:41.202: INFO: Number of nodes with available pods: 0
Jul 22 10:49:41.202: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:42.069: INFO: Number of nodes with available pods: 0
Jul 22 10:49:42.069: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:49:43.069: INFO: Number of nodes with available pods: 1
Jul 22 10:49:43.069: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-4f568, will wait for the garbage collector to delete the pods
Jul 22 10:49:43.135: INFO: Deleting DaemonSet.extensions daemon-set took: 6.752789ms
Jul 22 10:49:43.235: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.294445ms
Jul 22 10:49:47.650: INFO: Number of nodes with available pods: 0
Jul 22 10:49:47.650: INFO: Number of running nodes: 0, number of available pods: 0
Jul 22 10:49:47.654: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-4f568/daemonsets","resourceVersion":"2168135"},"items":null}
Jul 22 10:49:47.657: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-4f568/pods","resourceVersion":"2168135"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:49:47.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-4f568" for this suite.
Jul 22 10:49:53.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:49:53.809: INFO: namespace: e2e-tests-daemonsets-4f568, resource: bindings, ignored listing per whitelist
Jul 22 10:49:53.909: INFO: namespace e2e-tests-daemonsets-4f568 deletion completed in 6.185235043s
• [SLOW TEST:31.572 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:49:53.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-projected-all-test-volume-1358e5c9-cc09-11ea-aa05-0242ac11000b
STEP: Creating secret with name secret-projected-all-test-volume-1358e598-cc09-11ea-aa05-0242ac11000b
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul 22 10:49:54.007: INFO: Waiting up to 5m0s for pod "projected-volume-1358e539-cc09-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-tb28p" to be "success or failure"
Jul 22 10:49:54.041: INFO: Pod "projected-volume-1358e539-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.725713ms
Jul 22 10:49:56.064: INFO: Pod "projected-volume-1358e539-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057679216s
Jul 22 10:49:58.203: INFO: Pod "projected-volume-1358e539-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.196004229s
Jul 22 10:50:00.207: INFO: Pod "projected-volume-1358e539-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200470923s
Jul 22 10:50:02.212: INFO: Pod "projected-volume-1358e539-cc09-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.205230914s
STEP: Saw pod success
Jul 22 10:50:02.212: INFO: Pod "projected-volume-1358e539-cc09-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 10:50:02.215: INFO: Trying to get logs from node hunter-worker2 pod projected-volume-1358e539-cc09-11ea-aa05-0242ac11000b container projected-all-volume-test:
STEP: delete the pod
Jul 22 10:50:02.270: INFO: Waiting for pod projected-volume-1358e539-cc09-11ea-aa05-0242ac11000b to disappear
Jul 22 10:50:02.291: INFO: Pod projected-volume-1358e539-cc09-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:50:02.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-tb28p" for this suite.
Jul 22 10:50:12.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:50:12.404: INFO: namespace: e2e-tests-projected-tb28p, resource: bindings, ignored listing per whitelist
Jul 22 10:50:12.408: INFO: namespace e2e-tests-projected-tb28p deletion completed in 10.112851528s
• [SLOW TEST:18.499 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:50:12.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 10:50:12.552: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1e6c00e1-cc09-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-n4zvd" to be "success or failure"
Jul 22 10:50:12.569: INFO: Pod "downwardapi-volume-1e6c00e1-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.436788ms
Jul 22 10:50:14.573: INFO: Pod "downwardapi-volume-1e6c00e1-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02056421s
Jul 22 10:50:16.609: INFO: Pod "downwardapi-volume-1e6c00e1-cc09-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056913249s
STEP: Saw pod success
Jul 22 10:50:16.609: INFO: Pod "downwardapi-volume-1e6c00e1-cc09-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 10:50:16.610: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1e6c00e1-cc09-11ea-aa05-0242ac11000b container client-container:
STEP: delete the pod
Jul 22 10:50:16.913: INFO: Waiting for pod downwardapi-volume-1e6c00e1-cc09-11ea-aa05-0242ac11000b to disappear
Jul 22 10:50:17.077: INFO: Pod downwardapi-volume-1e6c00e1-cc09-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:50:17.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-n4zvd" for this suite.
Jul 22 10:50:23.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:50:23.148: INFO: namespace: e2e-tests-projected-n4zvd, resource: bindings, ignored listing per whitelist
Jul 22 10:50:23.214: INFO: namespace e2e-tests-projected-n4zvd deletion completed in 6.133145986s
• [SLOW TEST:10.806 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:50:23.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0722 10:50:53.913839 7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 22 10:50:53.913: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:50:53.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-kjqfb" for this suite.
Jul 22 10:51:02.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:51:02.128: INFO: namespace: e2e-tests-gc-kjqfb, resource: bindings, ignored listing per whitelist
Jul 22 10:51:02.191: INFO: namespace e2e-tests-gc-kjqfb deletion completed in 8.274892862s
• [SLOW TEST:38.977 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:51:02.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 10:51:02.332: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c10ca34-cc09-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-nd4wv" to be "success or failure"
Jul 22 10:51:02.343: INFO: Pod "downwardapi-volume-3c10ca34-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.715385ms
Jul 22 10:51:04.347: INFO: Pod "downwardapi-volume-3c10ca34-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014686568s
Jul 22 10:51:06.350: INFO: Pod "downwardapi-volume-3c10ca34-cc09-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.018285843s
Jul 22 10:51:08.354: INFO: Pod "downwardapi-volume-3c10ca34-cc09-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022251736s
STEP: Saw pod success
Jul 22 10:51:08.355: INFO: Pod "downwardapi-volume-3c10ca34-cc09-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 10:51:08.357: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-3c10ca34-cc09-11ea-aa05-0242ac11000b container client-container:
STEP: delete the pod
Jul 22 10:51:08.375: INFO: Waiting for pod downwardapi-volume-3c10ca34-cc09-11ea-aa05-0242ac11000b to disappear
Jul 22 10:51:08.390: INFO: Pod downwardapi-volume-3c10ca34-cc09-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:51:08.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-nd4wv" for this suite.
Jul 22 10:51:14.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:51:14.515: INFO: namespace: e2e-tests-downward-api-nd4wv, resource: bindings, ignored listing per whitelist
Jul 22 10:51:14.541: INFO: namespace e2e-tests-downward-api-nd4wv deletion completed in 6.148072651s
• [SLOW TEST:12.350 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:51:14.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-436e2256-cc09-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 10:51:14.650: INFO: Waiting up to 5m0s for pod "pod-secrets-43704ee1-cc09-11ea-aa05-0242ac11000b" in namespace "e2e-tests-secrets-fv29z" to be "success or failure"
Jul 22 10:51:14.654: INFO: Pod "pod-secrets-43704ee1-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.376184ms
Jul 22 10:51:16.676: INFO: Pod "pod-secrets-43704ee1-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026387723s
Jul 22 10:51:18.776: INFO: Pod "pod-secrets-43704ee1-cc09-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.125851973s
Jul 22 10:51:20.780: INFO: Pod "pod-secrets-43704ee1-cc09-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.129560404s
STEP: Saw pod success
Jul 22 10:51:20.780: INFO: Pod "pod-secrets-43704ee1-cc09-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 10:51:20.782: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-43704ee1-cc09-11ea-aa05-0242ac11000b container secret-env-test:
STEP: delete the pod
Jul 22 10:51:20.834: INFO: Waiting for pod pod-secrets-43704ee1-cc09-11ea-aa05-0242ac11000b to disappear
Jul 22 10:51:20.873: INFO: Pod pod-secrets-43704ee1-cc09-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:51:20.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-fv29z" for this suite.
Jul 22 10:51:26.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:51:26.922: INFO: namespace: e2e-tests-secrets-fv29z, resource: bindings, ignored listing per whitelist
Jul 22 10:51:26.968: INFO: namespace e2e-tests-secrets-fv29z deletion completed in 6.090044807s
• [SLOW TEST:12.426 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:51:26.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test hostPath mode
Jul 22 10:51:27.094: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-zgnx2" to be "success or failure"
Jul 22 10:51:27.192: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 97.351074ms
Jul 22 10:51:29.209: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114728862s
Jul 22 10:51:31.234: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139559162s
Jul 22 10:51:33.238: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.14381958s
STEP: Saw pod success
Jul 22 10:51:33.238: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul 22 10:51:33.242: INFO: Trying to get logs from node hunter-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jul 22 10:51:33.307: INFO: Waiting for pod pod-host-path-test to disappear
Jul 22 10:51:33.313: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:51:33.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-hostpath-zgnx2" for this suite.
Jul 22 10:51:39.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:51:39.387: INFO: namespace: e2e-tests-hostpath-zgnx2, resource: bindings, ignored listing per whitelist
Jul 22 10:51:39.403: INFO: namespace e2e-tests-hostpath-zgnx2 deletion completed in 6.08285534s
• [SLOW TEST:12.434 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:51:39.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563
[It] should update a single-container pod's image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 22 10:51:39.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jz8k8'
Jul 22 10:51:39.713: INFO: stderr: ""
Jul 22 10:51:39.713: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jul 22 10:51:44.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jz8k8 -o json'
Jul 22 10:51:44.854: INFO: stderr: ""
Jul 22 10:51:44.854: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-07-22T10:51:39Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-jz8k8\",\n \"resourceVersion\": \"2168799\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-jz8k8/pods/e2e-test-nginx-pod\",\n \"uid\": \"525fe736-cc09-11ea-b2c9-0242ac120008\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-sx5c8\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"hunter-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-sx5c8\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-sx5c8\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-22T10:51:39Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-22T10:51:42Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-22T10:51:42Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-07-22T10:51:39Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a8a2332cb39f4cb8bb4ad417a5b0ae2eb378ec055ce473ceaf7c87276100446e\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-07-22T10:51:42Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.14\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-07-22T10:51:39Z\"\n }\n}\n"
STEP: replace the image in the pod
Jul 22 10:51:44.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=e2e-tests-kubectl-jz8k8'
Jul 22 10:51:45.866: INFO: stderr: ""
Jul 22 10:51:45.866: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568
Jul 22 10:51:46.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-jz8k8'
Jul 22 10:51:50.033: INFO: stderr: ""
Jul 22 10:51:50.033: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:51:50.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jz8k8" for this suite.
Jul 22 10:51:56.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:51:56.073: INFO: namespace: e2e-tests-kubectl-jz8k8, resource: bindings, ignored listing per whitelist
Jul 22 10:51:56.129: INFO: namespace e2e-tests-kubectl-jz8k8 deletion completed in 6.089031135s
• [SLOW TEST:16.726 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update a single-container pod's image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:51:56.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-bsksl
I0722 10:51:56.266520 7 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-bsksl, replica count: 1
I0722 10:51:57.316998 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0722 10:51:58.317243 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0722 10:51:59.317474 7 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jul 22 10:51:59.446: INFO: Created: latency-svc-8tb4m
Jul 22 10:51:59.467: INFO: Got endpoints: latency-svc-8tb4m [49.735902ms]
Jul 22 10:51:59.488: INFO: Created: latency-svc-tthhm
Jul 22 10:51:59.545: INFO: Got endpoints: latency-svc-tthhm [78.014206ms]
Jul 22 10:51:59.547: INFO: Created: latency-svc-ddgn5
Jul 22 10:51:59.555: INFO: Got endpoints: latency-svc-ddgn5 [87.647918ms]
Jul 22 10:51:59.584: INFO: Created: latency-svc-45stz
Jul 22 10:51:59.608: INFO: Got endpoints: latency-svc-45stz [140.50322ms]
Jul 22 10:51:59.639: INFO: Created: latency-svc-x6rmn
Jul 22 10:51:59.707: INFO: Got endpoints: latency-svc-x6rmn [239.51661ms]
Jul 22 10:51:59.709: INFO: Created: latency-svc-gwbv9
Jul 22 10:51:59.736: INFO: Got endpoints: latency-svc-gwbv9 [268.387164ms]
Jul 22 10:51:59.899: INFO: Created: latency-svc-mgg4p
Jul 22 10:51:59.926: INFO: Got endpoints: latency-svc-mgg4p [458.698219ms]
Jul 22 10:51:59.969: INFO: Created: latency-svc-grp88
Jul 22 10:51:59.992: INFO: Got endpoints: latency-svc-grp88 [525.008368ms]
Jul 22 10:52:00.073: INFO: Created: latency-svc-ztt9k
Jul 22 10:52:00.077: INFO: Got endpoints: latency-svc-ztt9k [609.966014ms]
Jul 22 10:52:00.150: INFO: Created: latency-svc-gd5rb
Jul 22 10:52:00.193: INFO: Got endpoints: latency-svc-gd5rb [725.349457ms]
Jul 22 10:52:00.227: INFO: Created: latency-svc-dfxbn
Jul 22 10:52:00.257: INFO: Got endpoints: latency-svc-dfxbn [789.41056ms]
Jul 22 10:52:00.372: INFO: Created: latency-svc-jgwgb
Jul 22 10:52:00.375: INFO: Got endpoints: latency-svc-jgwgb [908.17416ms]
Jul 22 10:52:00.510: INFO: Created: latency-svc-lsscz
Jul 22 10:52:00.515: INFO: Got endpoints: latency-svc-lsscz [1.047309963s]
Jul 22 10:52:00.566: INFO: Created: latency-svc-rx9wv
Jul 22 10:52:00.589: INFO: Got endpoints: latency-svc-rx9wv [1.12156491s]
Jul 22 10:52:00.653: INFO: Created: latency-svc-8prnp
Jul 22 10:52:00.725: INFO: Created: latency-svc-sr298
Jul 22 10:52:00.725: INFO: Got endpoints: latency-svc-8prnp [1.257920575s]
Jul 22 10:52:00.751: INFO: Got endpoints: latency-svc-sr298 [1.283828164s]
Jul 22 10:52:00.821: INFO: Created: latency-svc-88556
Jul 22 10:52:00.824: INFO: Got endpoints: latency-svc-88556 [1.278845217s]
Jul 22 10:52:00.870: INFO: Created: latency-svc-ckfnw
Jul 22 10:52:00.905: INFO: Got endpoints: latency-svc-ckfnw [1.350005308s]
Jul 22 10:52:00.983: INFO: Created: latency-svc-tb72f
Jul 22 10:52:00.998: INFO: Got endpoints: latency-svc-tb72f [1.389907553s]
Jul 22 10:52:01.026: INFO: Created: latency-svc-wgbvs
Jul 22 10:52:01.040: INFO: Got endpoints: latency-svc-wgbvs [1.333006359s]
Jul 22 10:52:01.073: INFO: Created: latency-svc-qsr25
Jul 22 10:52:01.114: INFO: Got endpoints: latency-svc-qsr25 [1.378228619s]
Jul 22 10:52:01.127: INFO: Created: latency-svc-8rtv4
Jul 22 10:52:01.181: INFO: Got endpoints: latency-svc-8rtv4 [1.25503934s]
Jul 22 10:52:01.271: INFO: Created: latency-svc-pm7w6
Jul 22 10:52:01.274: INFO: Got endpoints: latency-svc-pm7w6 [1.281439397s]
Jul 22 10:52:01.331: INFO: Created: latency-svc-pvpzs
Jul 22 10:52:01.348: INFO: Got endpoints: latency-svc-pvpzs [1.270508931s]
Jul 22 10:52:01.428: INFO: Created: latency-svc-pfvjt
Jul 22 10:52:01.438: INFO: Got endpoints: latency-svc-pfvjt [1.244915135s]
Jul 22 10:52:01.457: INFO: Created: latency-svc-hql9m
Jul 22 10:52:01.475: INFO: Got endpoints: latency-svc-hql9m [1.217899321s]
Jul 22 10:52:01.505: INFO: Created: latency-svc-66b8s
Jul 22 10:52:01.516: INFO: Got endpoints: latency-svc-66b8s [1.140924342s]
Jul 22 10:52:01.570: INFO: Created: latency-svc-fbdr2
Jul 22 10:52:01.582: INFO: Got endpoints: latency-svc-fbdr2 [1.06752086s]
Jul 22 10:52:01.613: INFO: Created: latency-svc-4j79x
Jul 22 10:52:01.667: INFO: Got endpoints: latency-svc-4j79x [1.078158227s]
Jul 22 10:52:01.732: INFO: Created: latency-svc-xznpp
Jul 22 10:52:01.751: INFO: Got endpoints: latency-svc-xznpp [1.025780334s]
Jul 22 10:52:01.799: INFO: Created: latency-svc-n76h7
Jul 22 10:52:01.817: INFO: Got endpoints: latency-svc-n76h7 [1.065959123s]
Jul 22 10:52:02.002: INFO: Created: latency-svc-w4jzn
Jul 22 10:52:02.051: INFO: Got endpoints: latency-svc-w4jzn [1.227158406s]
Jul 22 10:52:02.198: INFO: Created: latency-svc-xxk62
Jul 22 10:52:02.204: INFO: Got endpoints: latency-svc-xxk62 [1.298998047s]
Jul 22 10:52:02.261: INFO: Created: latency-svc-8cgwc
Jul 22 10:52:02.286: INFO: Got endpoints: latency-svc-8cgwc [1.288800249s]
Jul 22 10:52:02.348: INFO: Created: latency-svc-tsx6l
Jul 22 10:52:02.351: INFO: Got endpoints: latency-svc-tsx6l [1.311238755s]
Jul 22 10:52:02.410: INFO: Created: latency-svc-tr6dt
Jul 22 10:52:02.424: INFO: Got endpoints: latency-svc-tr6dt [1.310456021s]
Jul 22 10:52:02.511: INFO: Created: latency-svc-cj2lh
Jul 22 10:52:02.514: INFO: Got endpoints: latency-svc-cj2lh [1.332499057s]
Jul 22 10:52:02.549: INFO: Created: latency-svc-kfs7w
Jul 22 10:52:02.562: INFO: Got endpoints: latency-svc-kfs7w [1.28847125s]
Jul 22 10:52:02.591: INFO: Created: latency-svc-4qm9r
Jul 22 10:52:02.608: INFO: Got endpoints: latency-svc-4qm9r [1.260357734s]
Jul 22 10:52:02.665: INFO: Created: latency-svc-2j4zw
Jul 22 10:52:02.677: INFO: Got endpoints: latency-svc-2j4zw [1.239438171s]
Jul 22 10:52:02.716: INFO: Created: latency-svc-bxk7r
Jul 22 10:52:02.743: INFO: Got endpoints: latency-svc-bxk7r [1.268417447s]
Jul 22 10:52:02.815: INFO: Created: latency-svc-r8gwc
Jul 22 10:52:02.821: INFO: Got endpoints: latency-svc-r8gwc [1.30480674s]
Jul 22 10:52:02.854: INFO: Created: latency-svc-pfntz
Jul 22 10:52:02.875: INFO: Got endpoints: latency-svc-pfntz [1.293235382s]
Jul 22 10:52:02.914: INFO: Created: latency-svc-thnnk
Jul 22 10:52:02.971: INFO: Got endpoints: latency-svc-thnnk [1.303449466s]
Jul 22 10:52:02.986: INFO: Created: latency-svc-s7p87
Jul 22 10:52:03.027: INFO: Got endpoints: latency-svc-s7p87 [1.276418547s]
Jul 22 10:52:03.058: INFO: Created: latency-svc-kfsf2
Jul 22 10:52:03.144: INFO: Got endpoints: latency-svc-kfsf2 [1.32677153s]
Jul 22 10:52:03.214: INFO: Created: latency-svc-pp4g4
Jul 22 10:52:03.230: INFO: Got endpoints: latency-svc-pp4g4 [1.178691101s]
Jul 22 10:52:03.294: INFO: Created: latency-svc-jt5gz
Jul 22 10:52:03.297: INFO: Got endpoints: latency-svc-jt5gz [1.092724445s]
Jul 22 10:52:03.317: INFO: Created: latency-svc-5556p
Jul 22 10:52:03.333: INFO: Got endpoints: latency-svc-5556p [1.046904812s]
Jul 22 10:52:03.364: INFO: Created: latency-svc-fkzbf
Jul 22 10:52:03.435: INFO: Got endpoints: latency-svc-fkzbf [1.084440632s]
Jul 22 10:52:03.478: INFO: Created: latency-svc-fn28q
Jul 22 10:52:03.494: INFO: Got endpoints: latency-svc-fn28q [1.070059323s]
Jul 22 10:52:03.582: INFO: Created: latency-svc-62x4d
Jul 22 10:52:03.591: INFO: Got endpoints: latency-svc-62x4d [1.077488554s]
Jul 22 10:52:03.616: INFO: Created: latency-svc-m7kwz
Jul 22 10:52:03.633: INFO: Got endpoints: latency-svc-m7kwz [1.071054596s]
Jul 22 10:52:03.652: INFO: Created: latency-svc-gjzk8
Jul 22 10:52:03.669: INFO: Got endpoints: latency-svc-gjzk8 [1.061272666s]
Jul 22 10:52:03.755: INFO: Created: latency-svc-4cwqd
Jul 22 10:52:03.760: INFO: Got endpoints: latency-svc-4cwqd [1.08253025s]
Jul 22 10:52:03.796: INFO: Created: latency-svc-n9tzt
Jul 22 10:52:03.808: INFO: Got endpoints: latency-svc-n9tzt [1.06475771s]
Jul 22 10:52:03.845: INFO: Created: latency-svc-chgzk
Jul 22 10:52:03.915: INFO: Got endpoints: latency-svc-chgzk [1.093233926s]
Jul 22 10:52:03.941: INFO: Created: latency-svc-72rqj
Jul 22 10:52:03.953: INFO: Got endpoints: latency-svc-72rqj [1.077177872s]
Jul 22 10:52:03.976: INFO: Created: latency-svc-c49z4
Jul 22 10:52:03.989: INFO: Got endpoints: latency-svc-c49z4 [1.018108209s]
Jul 22 10:52:04.072: INFO: Created: latency-svc-hxfb6
Jul 22 10:52:04.085: INFO: Got endpoints: latency-svc-hxfb6 [1.057316893s]
Jul 22 10:52:04.102: INFO: Created: latency-svc-xnztd
Jul 22 10:52:04.115: INFO: Got endpoints: latency-svc-xnztd [970.856803ms]
Jul 22 10:52:04.138: INFO: Created: latency-svc-tk22n
Jul 22 10:52:04.152: INFO: Got endpoints: latency-svc-tk22n [921.5652ms]
Jul 22 10:52:04.204: INFO: Created: latency-svc-sftsl
Jul 22 10:52:04.206: INFO: Got endpoints: latency-svc-sftsl [909.78228ms]
Jul 22 10:52:04.246: INFO: Created: latency-svc-tsb7m
Jul 22 10:52:04.266: INFO: Got endpoints: latency-svc-tsb7m [932.967122ms]
Jul 22 10:52:04.300: INFO: Created: latency-svc-x774n
Jul 22 10:52:04.347: INFO: Got endpoints: latency-svc-x774n [911.938445ms]
Jul 22 10:52:04.504: INFO: Created: latency-svc-2z9qx
Jul 22 10:52:04.560: INFO: Got endpoints: latency-svc-2z9qx [1.06586722s]
Jul 22 10:52:04.690: INFO: Created: latency-svc-hwgfz
Jul 22 10:52:04.717: INFO: Got endpoints: latency-svc-hwgfz [1.125374389s]
Jul 22 10:52:04.739: INFO: Created: latency-svc-wnnz7
Jul 22 10:52:04.752: INFO: Got endpoints: latency-svc-wnnz7 [1.118685941s]
Jul 22 10:52:04.861: INFO: Created: latency-svc-mwptg
Jul 22 10:52:04.867: INFO: Got endpoints: latency-svc-mwptg [1.197186189s]
Jul 22 10:52:04.895: INFO: Created: latency-svc-7lcbn
Jul 22 10:52:04.909: INFO: Got endpoints: latency-svc-7lcbn [1.149336415s]
Jul 22 10:52:05.091: INFO: Created: latency-svc-sm5jh
Jul 22 10:52:05.094: INFO: Got endpoints: latency-svc-sm5jh [1.285801117s]
Jul 22 10:52:05.550: INFO: Created: latency-svc-765rg
Jul 22 10:52:05.755: INFO: Got endpoints: latency-svc-765rg [1.840596396s]
Jul 22 10:52:05.825: INFO: Created: latency-svc-dfb92
Jul 22 10:52:05.844: INFO: Got endpoints: latency-svc-dfb92 [1.891679368s]
Jul 22 10:52:06.053: INFO: Created: latency-svc-w8b9s
Jul 22 10:52:06.084: INFO: Got endpoints: latency-svc-w8b9s [2.095205158s]
Jul 22 10:52:06.246: INFO: Created: latency-svc-x44xv
Jul 22 10:52:06.250: INFO: Got endpoints: latency-svc-x44xv [2.164921988s]
Jul 22 10:52:06.281: INFO: Created: latency-svc-fg68z
Jul 22 10:52:06.305: INFO: Got endpoints: latency-svc-fg68z [2.19038492s]
Jul 22 10:52:06.341: INFO: Created: latency-svc-dtjkp
Jul 22 10:52:06.407: INFO: Got endpoints: latency-svc-dtjkp [2.255801388s]
Jul 22 10:52:06.455: INFO: Created: latency-svc-zpdlz
Jul 22 10:52:06.476: INFO: Got endpoints: latency-svc-zpdlz [2.269275879s]
Jul 22 10:52:06.557: INFO: Created: latency-svc-w9b6n
Jul 22 10:52:06.594: INFO: Got endpoints: latency-svc-w9b6n [2.327735167s]
Jul 22 10:52:06.624: INFO: Created: latency-svc-6lqsq
Jul 22 10:52:06.637: INFO: Got endpoints: latency-svc-6lqsq [2.289435648s]
Jul 22 10:52:06.695: INFO: Created: latency-svc-spkvg
Jul 22 10:52:06.699: INFO: Got endpoints: latency-svc-spkvg [2.13809559s]
Jul 22 10:52:06.726: INFO: Created: latency-svc-f7b6p
Jul 22 10:52:06.740: INFO: Got endpoints: latency-svc-f7b6p [2.023191225s]
Jul 22 10:52:06.762: INFO: Created: latency-svc-9fbqw
Jul 22 10:52:06.776: INFO: Got endpoints: latency-svc-9fbqw [2.023796101s]
Jul 22 10:52:06.851: INFO: Created: latency-svc-qjjvs
Jul 22 10:52:06.880: INFO: Got endpoints: latency-svc-qjjvs [2.012846239s]
Jul 22 10:52:06.906: INFO: Created: latency-svc-7j5v7
Jul 22 10:52:06.935: INFO: Got endpoints: latency-svc-7j5v7 [2.026302884s]
Jul 22 10:52:07.001: INFO: Created: latency-svc-gbn8p
Jul 22 10:52:07.003: INFO: Got endpoints: latency-svc-gbn8p [1.909389563s]
Jul 22 10:52:07.062: INFO: Created: latency-svc-6kscr
Jul 22 10:52:07.083: INFO: Got endpoints: latency-svc-6kscr [1.327437996s]
Jul 22 10:52:07.175: INFO: Created: latency-svc-svvm2
Jul 22 10:52:07.179: INFO: Got endpoints: latency-svc-svvm2 [1.334574705s]
Jul 22 10:52:07.217: INFO: Created: latency-svc-bhlbv
Jul 22 10:52:07.228: INFO: Got endpoints: latency-svc-bhlbv [1.143590839s]
Jul 22 10:52:07.260: INFO: Created: latency-svc-dflms
Jul 22 10:52:07.270: INFO: Got endpoints: latency-svc-dflms [1.01969652s]
Jul 22 10:52:07.336: INFO: Created: latency-svc-gpp7q
Jul 22
10:52:07.373: INFO: Got endpoints: latency-svc-gpp7q [1.067873775s] Jul 22 10:52:07.392: INFO: Created: latency-svc-m4n2r Jul 22 10:52:07.403: INFO: Got endpoints: latency-svc-m4n2r [995.781815ms] Jul 22 10:52:07.434: INFO: Created: latency-svc-qznmx Jul 22 10:52:07.491: INFO: Got endpoints: latency-svc-qznmx [1.015494821s] Jul 22 10:52:07.524: INFO: Created: latency-svc-74jts Jul 22 10:52:07.536: INFO: Got endpoints: latency-svc-74jts [941.42123ms] Jul 22 10:52:07.578: INFO: Created: latency-svc-7s9h8 Jul 22 10:52:07.641: INFO: Got endpoints: latency-svc-7s9h8 [1.003880377s] Jul 22 10:52:07.643: INFO: Created: latency-svc-2n8fk Jul 22 10:52:07.656: INFO: Got endpoints: latency-svc-2n8fk [957.54119ms] Jul 22 10:52:07.723: INFO: Created: latency-svc-2ngnx Jul 22 10:52:07.734: INFO: Got endpoints: latency-svc-2ngnx [994.442256ms] Jul 22 10:52:07.785: INFO: Created: latency-svc-49hjz Jul 22 10:52:07.830: INFO: Got endpoints: latency-svc-49hjz [1.053733219s] Jul 22 10:52:07.885: INFO: Created: latency-svc-b8t6z Jul 22 10:52:07.952: INFO: Got endpoints: latency-svc-b8t6z [1.072421333s] Jul 22 10:52:07.974: INFO: Created: latency-svc-4wz65 Jul 22 10:52:07.987: INFO: Got endpoints: latency-svc-4wz65 [1.05135545s] Jul 22 10:52:08.016: INFO: Created: latency-svc-s2swl Jul 22 10:52:08.036: INFO: Got endpoints: latency-svc-s2swl [1.032324472s] Jul 22 10:52:08.120: INFO: Created: latency-svc-ccrrn Jul 22 10:52:08.124: INFO: Got endpoints: latency-svc-ccrrn [1.040890212s] Jul 22 10:52:08.154: INFO: Created: latency-svc-hhk8p Jul 22 10:52:08.174: INFO: Got endpoints: latency-svc-hhk8p [995.028431ms] Jul 22 10:52:08.214: INFO: Created: latency-svc-pcz85 Jul 22 10:52:08.259: INFO: Got endpoints: latency-svc-pcz85 [1.030860012s] Jul 22 10:52:08.268: INFO: Created: latency-svc-7vlp9 Jul 22 10:52:08.283: INFO: Got endpoints: latency-svc-7vlp9 [1.013185753s] Jul 22 10:52:08.305: INFO: Created: latency-svc-52vk8 Jul 22 10:52:08.333: INFO: Got endpoints: latency-svc-52vk8 [959.841545ms] Jul 22 10:52:08.395: INFO: Created: latency-svc-nm45t Jul 22 10:52:08.429: INFO: Got endpoints: latency-svc-nm45t [1.025973854s] Jul 22 10:52:08.460: INFO: Created: latency-svc-vl6ht Jul 22 10:52:08.469: INFO: Got endpoints: latency-svc-vl6ht [977.921232ms] Jul 22 10:52:08.489: INFO: Created: latency-svc-qnljx Jul 22 10:52:08.563: INFO: Got endpoints: latency-svc-qnljx [1.027254506s] Jul 22 10:52:08.566: INFO: Created: latency-svc-7zm9n Jul 22 10:52:08.578: INFO: Got endpoints: latency-svc-7zm9n [936.82411ms] Jul 22 10:52:08.598: INFO: Created: latency-svc-rk8tx Jul 22 10:52:08.614: INFO: Got endpoints: latency-svc-rk8tx [957.802183ms] Jul 22 10:52:08.646: INFO: Created: latency-svc-nftsj Jul 22 10:52:08.662: INFO: Got endpoints: latency-svc-nftsj [927.909722ms] Jul 22 10:52:08.719: INFO: Created: latency-svc-z9mn2 Jul 22 10:52:08.729: INFO: Got endpoints: latency-svc-z9mn2 [898.815354ms] Jul 22 10:52:08.777: INFO: Created: latency-svc-7c2f5 Jul 22 10:52:08.893: INFO: Got endpoints: latency-svc-7c2f5 [940.413058ms] Jul 22 10:52:08.898: INFO: Created: latency-svc-8knjh Jul 22 10:52:08.915: INFO: Got endpoints: latency-svc-8knjh [927.865119ms] Jul 22 10:52:08.958: INFO: Created: latency-svc-l94kg Jul 22 10:52:08.975: INFO: Got endpoints: latency-svc-l94kg [939.456528ms] Jul 22 10:52:09.109: INFO: Created: latency-svc-fpw2z Jul 22 10:52:09.126: INFO: Got endpoints: latency-svc-fpw2z [1.002014786s] Jul 22 10:52:09.144: INFO: Created: latency-svc-k42mq Jul 22 10:52:09.157: INFO: Got endpoints: latency-svc-k42mq [982.335611ms] Jul 22 
10:52:09.192: INFO: Created: latency-svc-qrl25 Jul 22 10:52:09.288: INFO: Got endpoints: latency-svc-qrl25 [1.02893986s] Jul 22 10:52:09.289: INFO: Created: latency-svc-p7jp2 Jul 22 10:52:09.300: INFO: Got endpoints: latency-svc-p7jp2 [1.016967543s] Jul 22 10:52:09.324: INFO: Created: latency-svc-lfkrz Jul 22 10:52:09.343: INFO: Got endpoints: latency-svc-lfkrz [1.009754372s] Jul 22 10:52:09.498: INFO: Created: latency-svc-mb44c Jul 22 10:52:09.535: INFO: Created: latency-svc-qvfvf Jul 22 10:52:09.564: INFO: Created: latency-svc-2cvch Jul 22 10:52:09.564: INFO: Got endpoints: latency-svc-mb44c [1.135052783s] Jul 22 10:52:09.577: INFO: Got endpoints: latency-svc-2cvch [1.014421939s] Jul 22 10:52:09.647: INFO: Created: latency-svc-g4ts5 Jul 22 10:52:09.648: INFO: Got endpoints: latency-svc-qvfvf [1.178197916s] Jul 22 10:52:09.650: INFO: Got endpoints: latency-svc-g4ts5 [1.072420803s] Jul 22 10:52:09.691: INFO: Created: latency-svc-48hp9 Jul 22 10:52:09.703: INFO: Got endpoints: latency-svc-48hp9 [1.089309231s] Jul 22 10:52:09.726: INFO: Created: latency-svc-9bppw Jul 22 10:52:09.740: INFO: Got endpoints: latency-svc-9bppw [1.07753056s] Jul 22 10:52:09.815: INFO: Created: latency-svc-m42l4 Jul 22 10:52:09.834: INFO: Got endpoints: latency-svc-m42l4 [1.104862559s] Jul 22 10:52:09.871: INFO: Created: latency-svc-cgbvb Jul 22 10:52:09.878: INFO: Got endpoints: latency-svc-cgbvb [985.658114ms] Jul 22 10:52:09.900: INFO: Created: latency-svc-mjmjx Jul 22 10:52:09.982: INFO: Got endpoints: latency-svc-mjmjx [1.067471657s] Jul 22 10:52:09.985: INFO: Created: latency-svc-bbxw7 Jul 22 10:52:09.999: INFO: Got endpoints: latency-svc-bbxw7 [1.023869048s] Jul 22 10:52:10.038: INFO: Created: latency-svc-cdn58 Jul 22 10:52:10.053: INFO: Got endpoints: latency-svc-cdn58 [927.601862ms] Jul 22 10:52:10.075: INFO: Created: latency-svc-h58f2 Jul 22 10:52:10.132: INFO: Got endpoints: latency-svc-h58f2 [975.336779ms] Jul 22 10:52:10.134: INFO: Created: latency-svc-85pgk Jul 22 10:52:10.144: INFO: Got endpoints: latency-svc-85pgk [856.20221ms] Jul 22 10:52:10.177: INFO: Created: latency-svc-x8cm4 Jul 22 10:52:10.193: INFO: Got endpoints: latency-svc-x8cm4 [892.500946ms] Jul 22 10:52:10.219: INFO: Created: latency-svc-zqpvs Jul 22 10:52:10.282: INFO: Got endpoints: latency-svc-zqpvs [938.685869ms] Jul 22 10:52:10.302: INFO: Created: latency-svc-b8kpg Jul 22 10:52:10.313: INFO: Got endpoints: latency-svc-b8kpg [748.452336ms] Jul 22 10:52:10.335: INFO: Created: latency-svc-hplsn Jul 22 10:52:10.343: INFO: Got endpoints: latency-svc-hplsn [765.187344ms] Jul 22 10:52:10.368: INFO: Created: latency-svc-4dpc8 Jul 22 10:52:10.379: INFO: Got endpoints: latency-svc-4dpc8 [731.581255ms] Jul 22 10:52:10.464: INFO: Created: latency-svc-8s9dx Jul 22 10:52:10.487: INFO: Got endpoints: latency-svc-8s9dx [837.206109ms] Jul 22 10:52:10.530: INFO: Created: latency-svc-qh8sl Jul 22 10:52:10.695: INFO: Got endpoints: latency-svc-qh8sl [991.748005ms] Jul 22 10:52:10.704: INFO: Created: latency-svc-5mjls Jul 22 10:52:10.722: INFO: Got endpoints: latency-svc-5mjls [982.347977ms] Jul 22 10:52:10.782: INFO: Created: latency-svc-bldfd Jul 22 10:52:10.839: INFO: Got endpoints: latency-svc-bldfd [1.005107045s] Jul 22 10:52:10.844: INFO: Created: latency-svc-xgq6b Jul 22 10:52:10.865: INFO: Got endpoints: latency-svc-xgq6b [986.801029ms] Jul 22 10:52:10.890: INFO: Created: latency-svc-bpbhx Jul 22 10:52:10.903: INFO: Got endpoints: latency-svc-bpbhx [920.168238ms] Jul 22 10:52:10.919: INFO: Created: latency-svc-jfw2w Jul 22 10:52:10.933: INFO: 
Got endpoints: latency-svc-jfw2w [933.645493ms] Jul 22 10:52:10.982: INFO: Created: latency-svc-92vmf Jul 22 10:52:10.993: INFO: Got endpoints: latency-svc-92vmf [939.618244ms] Jul 22 10:52:11.010: INFO: Created: latency-svc-xx77p Jul 22 10:52:11.025: INFO: Got endpoints: latency-svc-xx77p [893.006307ms] Jul 22 10:52:11.046: INFO: Created: latency-svc-7zbfx Jul 22 10:52:11.060: INFO: Got endpoints: latency-svc-7zbfx [915.682787ms] Jul 22 10:52:11.081: INFO: Created: latency-svc-hqpdl Jul 22 10:52:11.144: INFO: Got endpoints: latency-svc-hqpdl [951.5295ms] Jul 22 10:52:11.151: INFO: Created: latency-svc-8dpfv Jul 22 10:52:11.156: INFO: Got endpoints: latency-svc-8dpfv [874.407417ms] Jul 22 10:52:11.184: INFO: Created: latency-svc-swdhd Jul 22 10:52:11.217: INFO: Got endpoints: latency-svc-swdhd [903.851915ms] Jul 22 10:52:11.294: INFO: Created: latency-svc-k2thw Jul 22 10:52:11.296: INFO: Got endpoints: latency-svc-k2thw [953.517502ms] Jul 22 10:52:11.334: INFO: Created: latency-svc-9wvjz Jul 22 10:52:11.350: INFO: Got endpoints: latency-svc-9wvjz [970.412792ms] Jul 22 10:52:11.394: INFO: Created: latency-svc-2djhq Jul 22 10:52:11.467: INFO: Got endpoints: latency-svc-2djhq [979.712186ms] Jul 22 10:52:11.469: INFO: Created: latency-svc-gxwhj Jul 22 10:52:11.502: INFO: Got endpoints: latency-svc-gxwhj [806.338309ms] Jul 22 10:52:11.532: INFO: Created: latency-svc-2jbmn Jul 22 10:52:11.549: INFO: Got endpoints: latency-svc-2jbmn [826.980948ms] Jul 22 10:52:11.611: INFO: Created: latency-svc-w2248 Jul 22 10:52:11.614: INFO: Got endpoints: latency-svc-w2248 [774.758898ms] Jul 22 10:52:11.652: INFO: Created: latency-svc-jwr2r Jul 22 10:52:11.662: INFO: Got endpoints: latency-svc-jwr2r [797.198637ms] Jul 22 10:52:11.688: INFO: Created: latency-svc-q2rtf Jul 22 10:52:11.785: INFO: Got endpoints: latency-svc-q2rtf [882.276419ms] Jul 22 10:52:11.787: INFO: Created: latency-svc-m8hnq Jul 22 10:52:11.795: INFO: Got endpoints: latency-svc-m8hnq [861.760155ms] Jul 22 10:52:11.820: INFO: Created: latency-svc-74pmx Jul 22 10:52:11.834: INFO: Got endpoints: latency-svc-74pmx [841.269263ms] Jul 22 10:52:11.849: INFO: Created: latency-svc-mdtth Jul 22 10:52:11.874: INFO: Got endpoints: latency-svc-mdtth [848.930937ms] Jul 22 10:52:11.953: INFO: Created: latency-svc-p5czc Jul 22 10:52:11.970: INFO: Got endpoints: latency-svc-p5czc [910.201753ms] Jul 22 10:52:12.024: INFO: Created: latency-svc-79fzh Jul 22 10:52:12.084: INFO: Got endpoints: latency-svc-79fzh [940.275342ms] Jul 22 10:52:12.090: INFO: Created: latency-svc-rjcps Jul 22 10:52:12.145: INFO: Got endpoints: latency-svc-rjcps [988.237792ms] Jul 22 10:52:12.492: INFO: Created: latency-svc-kwmjb Jul 22 10:52:12.665: INFO: Got endpoints: latency-svc-kwmjb [1.44833762s] Jul 22 10:52:12.669: INFO: Created: latency-svc-d45mk Jul 22 10:52:12.691: INFO: Got endpoints: latency-svc-d45mk [1.394294195s] Jul 22 10:52:12.732: INFO: Created: latency-svc-6l6qz Jul 22 10:52:12.744: INFO: Got endpoints: latency-svc-6l6qz [1.394539175s] Jul 22 10:52:12.900: INFO: Created: latency-svc-bdsks Jul 22 10:52:12.902: INFO: Got endpoints: latency-svc-bdsks [1.434770781s] Jul 22 10:52:13.206: INFO: Created: latency-svc-mjn9k Jul 22 10:52:13.294: INFO: Got endpoints: latency-svc-mjn9k [1.79209925s] Jul 22 10:52:13.332: INFO: Created: latency-svc-pvs5g Jul 22 10:52:13.345: INFO: Got endpoints: latency-svc-pvs5g [1.795311198s] Jul 22 10:52:13.392: INFO: Created: latency-svc-qngsc Jul 22 10:52:13.474: INFO: Got endpoints: latency-svc-qngsc [1.860081876s] Jul 22 10:52:13.475: INFO: 
Created: latency-svc-z2rd7 Jul 22 10:52:13.494: INFO: Got endpoints: latency-svc-z2rd7 [1.832001029s] Jul 22 10:52:13.521: INFO: Created: latency-svc-zlmr2 Jul 22 10:52:13.542: INFO: Got endpoints: latency-svc-zlmr2 [1.756885504s] Jul 22 10:52:13.566: INFO: Created: latency-svc-9vgdt Jul 22 10:52:13.605: INFO: Got endpoints: latency-svc-9vgdt [1.81021654s] Jul 22 10:52:13.632: INFO: Created: latency-svc-n4fd2 Jul 22 10:52:13.645: INFO: Got endpoints: latency-svc-n4fd2 [1.81074171s] Jul 22 10:52:13.662: INFO: Created: latency-svc-9r6g2 Jul 22 10:52:13.676: INFO: Got endpoints: latency-svc-9r6g2 [1.80186505s] Jul 22 10:52:13.704: INFO: Created: latency-svc-rjj6l Jul 22 10:52:13.761: INFO: Got endpoints: latency-svc-rjj6l [1.79119619s] Jul 22 10:52:13.800: INFO: Created: latency-svc-ndvtq Jul 22 10:52:13.916: INFO: Got endpoints: latency-svc-ndvtq [1.831863922s] Jul 22 10:52:13.918: INFO: Created: latency-svc-k92ql Jul 22 10:52:13.928: INFO: Got endpoints: latency-svc-k92ql [1.783436888s] Jul 22 10:52:13.986: INFO: Created: latency-svc-7prt5 Jul 22 10:52:14.001: INFO: Got endpoints: latency-svc-7prt5 [1.335949345s] Jul 22 10:52:14.067: INFO: Created: latency-svc-5pg7l Jul 22 10:52:14.082: INFO: Got endpoints: latency-svc-5pg7l [1.391077244s] Jul 22 10:52:14.125: INFO: Created: latency-svc-qpg7g Jul 22 10:52:14.152: INFO: Got endpoints: latency-svc-qpg7g [1.40724101s] Jul 22 10:52:14.217: INFO: Created: latency-svc-dkhgl Jul 22 10:52:14.224: INFO: Got endpoints: latency-svc-dkhgl [1.321498648s] Jul 22 10:52:14.250: INFO: Created: latency-svc-6mh8r Jul 22 10:52:14.266: INFO: Got endpoints: latency-svc-6mh8r [971.989535ms] Jul 22 10:52:14.298: INFO: Created: latency-svc-7bfs8 Jul 22 10:52:14.315: INFO: Got endpoints: latency-svc-7bfs8 [970.579501ms] Jul 22 10:52:14.373: INFO: Created: latency-svc-g596b Jul 22 10:52:14.386: INFO: Got endpoints: latency-svc-g596b [912.373966ms] Jul 22 10:52:14.419: INFO: Created: latency-svc-8f74m Jul 22 10:52:14.441: INFO: Got endpoints: latency-svc-8f74m [946.564337ms] Jul 22 10:52:14.522: INFO: Created: latency-svc-tql4t Jul 22 10:52:14.525: INFO: Got endpoints: latency-svc-tql4t [983.12063ms] Jul 22 10:52:14.556: INFO: Created: latency-svc-mp89m Jul 22 10:52:14.567: INFO: Got endpoints: latency-svc-mp89m [962.111613ms] Jul 22 10:52:14.586: INFO: Created: latency-svc-tx7hp Jul 22 10:52:14.597: INFO: Got endpoints: latency-svc-tx7hp [951.782062ms] Jul 22 10:52:14.617: INFO: Created: latency-svc-l8n9m Jul 22 10:52:14.659: INFO: Got endpoints: latency-svc-l8n9m [983.226418ms] Jul 22 10:52:14.676: INFO: Created: latency-svc-cm6h6 Jul 22 10:52:14.689: INFO: Got endpoints: latency-svc-cm6h6 [927.533427ms] Jul 22 10:52:14.712: INFO: Created: latency-svc-fbslk Jul 22 10:52:14.724: INFO: Got endpoints: latency-svc-fbslk [807.619327ms] Jul 22 10:52:14.804: INFO: Created: latency-svc-9nws7 Jul 22 10:52:14.806: INFO: Got endpoints: latency-svc-9nws7 [877.67889ms] Jul 22 10:52:14.845: INFO: Created: latency-svc-rcxkv Jul 22 10:52:14.880: INFO: Got endpoints: latency-svc-rcxkv [878.394129ms] Jul 22 10:52:14.960: INFO: Created: latency-svc-9jxb9 Jul 22 10:52:14.972: INFO: Got endpoints: latency-svc-9jxb9 [889.676024ms] Jul 22 10:52:15.000: INFO: Created: latency-svc-lxcn6 Jul 22 10:52:15.013: INFO: Got endpoints: latency-svc-lxcn6 [861.745063ms] Jul 22 10:52:15.048: INFO: Created: latency-svc-rcwnf Jul 22 10:52:15.090: INFO: Got endpoints: latency-svc-rcwnf [866.250643ms] Jul 22 10:52:15.108: INFO: Created: latency-svc-ws7qq Jul 22 10:52:15.122: INFO: Got endpoints: 
latency-svc-ws7qq [856.030577ms] Jul 22 10:52:15.122: INFO: Latencies: [78.014206ms 87.647918ms 140.50322ms 239.51661ms 268.387164ms 458.698219ms 525.008368ms 609.966014ms 725.349457ms 731.581255ms 748.452336ms 765.187344ms 774.758898ms 789.41056ms 797.198637ms 806.338309ms 807.619327ms 826.980948ms 837.206109ms 841.269263ms 848.930937ms 856.030577ms 856.20221ms 861.745063ms 861.760155ms 866.250643ms 874.407417ms 877.67889ms 878.394129ms 882.276419ms 889.676024ms 892.500946ms 893.006307ms 898.815354ms 903.851915ms 908.17416ms 909.78228ms 910.201753ms 911.938445ms 912.373966ms 915.682787ms 920.168238ms 921.5652ms 927.533427ms 927.601862ms 927.865119ms 927.909722ms 932.967122ms 933.645493ms 936.82411ms 938.685869ms 939.456528ms 939.618244ms 940.275342ms 940.413058ms 941.42123ms 946.564337ms 951.5295ms 951.782062ms 953.517502ms 957.54119ms 957.802183ms 959.841545ms 962.111613ms 970.412792ms 970.579501ms 970.856803ms 971.989535ms 975.336779ms 977.921232ms 979.712186ms 982.335611ms 982.347977ms 983.12063ms 983.226418ms 985.658114ms 986.801029ms 988.237792ms 991.748005ms 994.442256ms 995.028431ms 995.781815ms 1.002014786s 1.003880377s 1.005107045s 1.009754372s 1.013185753s 1.014421939s 1.015494821s 1.016967543s 1.018108209s 1.01969652s 1.023869048s 1.025780334s 1.025973854s 1.027254506s 1.02893986s 1.030860012s 1.032324472s 1.040890212s 1.046904812s 1.047309963s 1.05135545s 1.053733219s 1.057316893s 1.061272666s 1.06475771s 1.06586722s 1.065959123s 1.067471657s 1.06752086s 1.067873775s 1.070059323s 1.071054596s 1.072420803s 1.072421333s 1.077177872s 1.077488554s 1.07753056s 1.078158227s 1.08253025s 1.084440632s 1.089309231s 1.092724445s 1.093233926s 1.104862559s 1.118685941s 1.12156491s 1.125374389s 1.135052783s 1.140924342s 1.143590839s 1.149336415s 1.178197916s 1.178691101s 1.197186189s 1.217899321s 1.227158406s 1.239438171s 1.244915135s 1.25503934s 1.257920575s 1.260357734s 1.268417447s 1.270508931s 1.276418547s 1.278845217s 1.281439397s 1.283828164s 1.285801117s 1.28847125s 1.288800249s 1.293235382s 1.298998047s 1.303449466s 1.30480674s 1.310456021s 1.311238755s 1.321498648s 1.32677153s 1.327437996s 1.332499057s 1.333006359s 1.334574705s 1.335949345s 1.350005308s 1.378228619s 1.389907553s 1.391077244s 1.394294195s 1.394539175s 1.40724101s 1.434770781s 1.44833762s 1.756885504s 1.783436888s 1.79119619s 1.79209925s 1.795311198s 1.80186505s 1.81021654s 1.81074171s 1.831863922s 1.832001029s 1.840596396s 1.860081876s 1.891679368s 1.909389563s 2.012846239s 2.023191225s 2.023796101s 2.026302884s 2.095205158s 2.13809559s 2.164921988s 2.19038492s 2.255801388s 2.269275879s 2.289435648s 2.327735167s] Jul 22 10:52:15.122: INFO: 50 %ile: 1.046904812s Jul 22 10:52:15.122: INFO: 90 %ile: 1.81021654s Jul 22 10:52:15.122: INFO: 99 %ile: 2.289435648s Jul 22 10:52:15.122: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 Jul 22 10:52:15.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-tests-svc-latency-bsksl" for this suite. 
Jul 22 10:52:51.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:52:51.277: INFO: namespace: e2e-tests-svc-latency-bsksl, resource: bindings, ignored listing per whitelist
Jul 22 10:52:51.286: INFO: namespace e2e-tests-svc-latency-bsksl deletion completed in 36.157008103s

• [SLOW TEST:55.157 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:52:51.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating secret e2e-tests-secrets-mdv22/secret-test-7d256e80-cc09-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 10:52:51.514: INFO: Waiting up to 5m0s for pod "pod-configmaps-7d2b99cb-cc09-11ea-aa05-0242ac11000b" in namespace "e2e-tests-secrets-mdv22" to be "success or failure"
Jul 22 10:52:51.534: INFO: Pod "pod-configmaps-7d2b99cb-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.005525ms
Jul 22 10:52:53.537: INFO: Pod "pod-configmaps-7d2b99cb-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023239035s
Jul 22 10:52:55.541: INFO: Pod "pod-configmaps-7d2b99cb-cc09-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026777537s
STEP: Saw pod success
Jul 22 10:52:55.541: INFO: Pod "pod-configmaps-7d2b99cb-cc09-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 10:52:55.543: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-7d2b99cb-cc09-11ea-aa05-0242ac11000b container env-test: 
STEP: delete the pod
Jul 22 10:52:55.570: INFO: Waiting for pod pod-configmaps-7d2b99cb-cc09-11ea-aa05-0242ac11000b to disappear
Jul 22 10:52:55.615: INFO: Pod pod-configmaps-7d2b99cb-cc09-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:52:55.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-mdv22" for this suite.
Jul 22 10:53:03.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:53:03.673: INFO: namespace: e2e-tests-secrets-mdv22, resource: bindings, ignored listing per whitelist
Jul 22 10:53:03.700: INFO: namespace e2e-tests-secrets-mdv22 deletion completed in 8.08150647s

• [SLOW TEST:12.414 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:53:03.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 10:53:04.405: INFO: (0) /api/v1/nodes/hunter-worker/proxy/logs/:
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul 22 10:53:11.411: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:11.414: INFO: Number of nodes with available pods: 0
Jul 22 10:53:11.414: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:53:12.419: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:12.423: INFO: Number of nodes with available pods: 0
Jul 22 10:53:12.423: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:53:13.841: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:13.844: INFO: Number of nodes with available pods: 0
Jul 22 10:53:13.844: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:53:14.419: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:14.423: INFO: Number of nodes with available pods: 0
Jul 22 10:53:14.423: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:53:15.560: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:15.633: INFO: Number of nodes with available pods: 0
Jul 22 10:53:15.633: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 10:53:16.420: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:16.423: INFO: Number of nodes with available pods: 2
Jul 22 10:53:16.423: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jul 22 10:53:16.477: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:16.479: INFO: Number of nodes with available pods: 1
Jul 22 10:53:16.479: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 22 10:53:17.484: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:17.487: INFO: Number of nodes with available pods: 1
Jul 22 10:53:17.487: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 22 10:53:18.485: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:18.488: INFO: Number of nodes with available pods: 1
Jul 22 10:53:18.488: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 22 10:53:19.483: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:19.485: INFO: Number of nodes with available pods: 1
Jul 22 10:53:19.485: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 22 10:53:20.484: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:20.487: INFO: Number of nodes with available pods: 1
Jul 22 10:53:20.487: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 22 10:53:21.484: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:21.487: INFO: Number of nodes with available pods: 1
Jul 22 10:53:21.487: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 22 10:53:22.484: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:22.488: INFO: Number of nodes with available pods: 1
Jul 22 10:53:22.488: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 22 10:53:23.484: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:23.488: INFO: Number of nodes with available pods: 1
Jul 22 10:53:23.488: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 22 10:53:24.485: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 10:53:24.488: INFO: Number of nodes with available pods: 2
Jul 22 10:53:24.488: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-rz7qq, will wait for the garbage collector to delete the pods
Jul 22 10:53:24.554: INFO: Deleting DaemonSet.extensions daemon-set took: 9.974853ms
Jul 22 10:53:24.654: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.234399ms
Jul 22 10:53:37.657: INFO: Number of nodes with available pods: 0
Jul 22 10:53:37.657: INFO: Number of running nodes: 0, number of available pods: 0
Jul 22 10:53:37.659: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-rz7qq/daemonsets","resourceVersion":"2170484"},"items":null}

Jul 22 10:53:37.705: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-rz7qq/pods","resourceVersion":"2170485"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:53:37.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-rz7qq" for this suite.
Jul 22 10:53:43.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:53:43.929: INFO: namespace: e2e-tests-daemonsets-rz7qq, resource: bindings, ignored listing per whitelist
Jul 22 10:53:43.957: INFO: namespace e2e-tests-daemonsets-rz7qq deletion completed in 6.239181504s

• [SLOW TEST:32.828 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
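Note for readers following the DaemonSet run above: only the scheduling messages appear in the log, never the manifest itself. The block below is a minimal sketch, not the e2e framework's own code, of an equivalent DaemonSet built with the Go API types (it assumes k8s.io/api and k8s.io/apimachinery are on the module path; the image, label key and the added toleration are illustrative assumptions). The toleration shows what scheduling onto the skipped hunter-control-plane node would require; the DaemonSet in the run above evidently does not carry one, which is why that node is skipped.

```go
// Minimal sketch (not the conformance test's source): a DaemonSet similar to the
// "daemon-set" object exercised above. Image, labels and the toleration are assumed.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // assumed label key/value

	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine", // assumed image
					}},
					// The log shows the test's pods cannot tolerate the control-plane
					// taint, so hunter-control-plane is skipped. A toleration like this
					// one would allow a daemon pod on that node as well.
					Tolerations: []corev1.Toleration{{
						Key:      "node-role.kubernetes.io/master",
						Operator: corev1.TolerationOpExists,
						Effect:   corev1.TaintEffectNoSchedule,
					}},
				},
			},
		},
	}

	// Print the object that would be submitted to the API server.
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
```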
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:53:43.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-9c90df63-cc09-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 10:53:44.201: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9c9282a6-cc09-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-9t7gz" to be "success or failure"
Jul 22 10:53:44.679: INFO: Pod "pod-projected-configmaps-9c9282a6-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 477.454447ms
Jul 22 10:53:46.683: INFO: Pod "pod-projected-configmaps-9c9282a6-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.481361172s
Jul 22 10:53:48.687: INFO: Pod "pod-projected-configmaps-9c9282a6-cc09-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.485723397s
Jul 22 10:53:50.691: INFO: Pod "pod-projected-configmaps-9c9282a6-cc09-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.489885834s
STEP: Saw pod success
Jul 22 10:53:50.691: INFO: Pod "pod-projected-configmaps-9c9282a6-cc09-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 10:53:50.694: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-9c9282a6-cc09-11ea-aa05-0242ac11000b container projected-configmap-volume-test: 
STEP: delete the pod
Jul 22 10:53:50.807: INFO: Waiting for pod pod-projected-configmaps-9c9282a6-cc09-11ea-aa05-0242ac11000b to disappear
Jul 22 10:53:50.812: INFO: Pod pod-projected-configmaps-9c9282a6-cc09-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:53:50.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-9t7gz" for this suite.
Jul 22 10:53:56.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:53:56.863: INFO: namespace: e2e-tests-projected-9t7gz, resource: bindings, ignored listing per whitelist
Jul 22 10:53:56.916: INFO: namespace e2e-tests-projected-9t7gz deletion completed in 6.099364882s

• [SLOW TEST:12.958 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
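The "defaultMode set" case above only logs pod phase transitions, not the volume definition it checks. Below is a minimal sketch, assuming the k8s.io/api Go types, of a pod consuming a ConfigMap through a projected volume with DefaultMode set; every name, the 0400 mode value and the busybox image are illustration-only assumptions, not values taken from the test.

```go
// Minimal sketch (illustrative only): projected ConfigMap volume with DefaultMode.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // assumed mode; the test's actual value is not shown in the log

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"}, // assumed name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume", // assumed ConfigMap name
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29", // assumed image
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```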
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:53:56.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-a43706c1-cc09-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 10:53:57.031: INFO: Waiting up to 5m0s for pod "pod-configmaps-a437b0ba-cc09-11ea-aa05-0242ac11000b" in namespace "e2e-tests-configmap-lb5zg" to be "success or failure"
Jul 22 10:53:57.074: INFO: Pod "pod-configmaps-a437b0ba-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 42.842135ms
Jul 22 10:53:59.078: INFO: Pod "pod-configmaps-a437b0ba-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046601662s
Jul 22 10:54:01.083: INFO: Pod "pod-configmaps-a437b0ba-cc09-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.051464057s
Jul 22 10:54:03.087: INFO: Pod "pod-configmaps-a437b0ba-cc09-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055444739s
STEP: Saw pod success
Jul 22 10:54:03.087: INFO: Pod "pod-configmaps-a437b0ba-cc09-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 10:54:03.090: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-a437b0ba-cc09-11ea-aa05-0242ac11000b container configmap-volume-test: 
STEP: delete the pod
Jul 22 10:54:03.113: INFO: Waiting for pod pod-configmaps-a437b0ba-cc09-11ea-aa05-0242ac11000b to disappear
Jul 22 10:54:03.118: INFO: Pod pod-configmaps-a437b0ba-cc09-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:54:03.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lb5zg" for this suite.
Jul 22 10:54:09.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:54:09.174: INFO: namespace: e2e-tests-configmap-lb5zg, resource: bindings, ignored listing per whitelist
Jul 22 10:54:09.191: INFO: namespace e2e-tests-configmap-lb5zg deletion completed in 6.070105606s

• [SLOW TEST:12.275 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
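Likewise, the "multiple volumes in the same pod" case above does not echo its pod spec. A minimal sketch of the idea follows: one ConfigMap mounted twice at different paths in the same pod. All names, paths and the image are illustrative assumptions.

```go
// Minimal sketch (illustrative only): the same ConfigMap backing two volumes in one pod.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Both volumes reference the same ConfigMap (name assumed).
	cmSource := corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
		},
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"}, // assumed name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "configmap-volume-1", VolumeSource: cmSource},
				{Name: "configmap-volume-2", VolumeSource: cmSource},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29", // assumed image
				Command: []string{"sh", "-c", "cat /etc/configmap-1/* /etc/configmap-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/configmap-1"},
					{Name: "configmap-volume-2", MountPath: "/etc/configmap-2"},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```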
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:54:09.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-2ldk4
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 22 10:54:09.293: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 22 10:54:37.558: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.20:8080/dial?request=hostName&protocol=http&host=10.244.2.19&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-2ldk4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 10:54:37.558: INFO: >>> kubeConfig: /root/.kube/config
I0722 10:54:37.591472       7 log.go:172] (0xc000508e70) (0xc001afe960) Create stream
I0722 10:54:37.591509       7 log.go:172] (0xc000508e70) (0xc001afe960) Stream added, broadcasting: 1
I0722 10:54:37.594417       7 log.go:172] (0xc000508e70) Reply frame received for 1
I0722 10:54:37.594478       7 log.go:172] (0xc000508e70) (0xc001afea00) Create stream
I0722 10:54:37.594495       7 log.go:172] (0xc000508e70) (0xc001afea00) Stream added, broadcasting: 3
I0722 10:54:37.595551       7 log.go:172] (0xc000508e70) Reply frame received for 3
I0722 10:54:37.595596       7 log.go:172] (0xc000508e70) (0xc001f92960) Create stream
I0722 10:54:37.595611       7 log.go:172] (0xc000508e70) (0xc001f92960) Stream added, broadcasting: 5
I0722 10:54:37.596527       7 log.go:172] (0xc000508e70) Reply frame received for 5
I0722 10:54:37.719279       7 log.go:172] (0xc000508e70) Data frame received for 3
I0722 10:54:37.719300       7 log.go:172] (0xc001afea00) (3) Data frame handling
I0722 10:54:37.719311       7 log.go:172] (0xc001afea00) (3) Data frame sent
I0722 10:54:37.720108       7 log.go:172] (0xc000508e70) Data frame received for 3
I0722 10:54:37.720183       7 log.go:172] (0xc001afea00) (3) Data frame handling
I0722 10:54:37.720440       7 log.go:172] (0xc000508e70) Data frame received for 5
I0722 10:54:37.720480       7 log.go:172] (0xc001f92960) (5) Data frame handling
I0722 10:54:37.722391       7 log.go:172] (0xc000508e70) Data frame received for 1
I0722 10:54:37.722403       7 log.go:172] (0xc001afe960) (1) Data frame handling
I0722 10:54:37.722409       7 log.go:172] (0xc001afe960) (1) Data frame sent
I0722 10:54:37.722418       7 log.go:172] (0xc000508e70) (0xc001afe960) Stream removed, broadcasting: 1
I0722 10:54:37.722427       7 log.go:172] (0xc000508e70) Go away received
I0722 10:54:37.722569       7 log.go:172] (0xc000508e70) (0xc001afe960) Stream removed, broadcasting: 1
I0722 10:54:37.722605       7 log.go:172] (0xc000508e70) (0xc001afea00) Stream removed, broadcasting: 3
I0722 10:54:37.722632       7 log.go:172] (0xc000508e70) (0xc001f92960) Stream removed, broadcasting: 5
Jul 22 10:54:37.722: INFO: Waiting for endpoints: map[]
Jul 22 10:54:37.726: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.20:8080/dial?request=hostName&protocol=http&host=10.244.1.244&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-2ldk4 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 10:54:37.726: INFO: >>> kubeConfig: /root/.kube/config
I0722 10:54:37.953686       7 log.go:172] (0xc0010942c0) (0xc002303040) Create stream
I0722 10:54:37.953748       7 log.go:172] (0xc0010942c0) (0xc002303040) Stream added, broadcasting: 1
I0722 10:54:37.955874       7 log.go:172] (0xc0010942c0) Reply frame received for 1
I0722 10:54:37.955904       7 log.go:172] (0xc0010942c0) (0xc001f92a00) Create stream
I0722 10:54:37.955916       7 log.go:172] (0xc0010942c0) (0xc001f92a00) Stream added, broadcasting: 3
I0722 10:54:37.956966       7 log.go:172] (0xc0010942c0) Reply frame received for 3
I0722 10:54:37.957007       7 log.go:172] (0xc0010942c0) (0xc0023030e0) Create stream
I0722 10:54:37.957022       7 log.go:172] (0xc0010942c0) (0xc0023030e0) Stream added, broadcasting: 5
I0722 10:54:37.958024       7 log.go:172] (0xc0010942c0) Reply frame received for 5
I0722 10:54:38.037350       7 log.go:172] (0xc0010942c0) Data frame received for 3
I0722 10:54:38.037377       7 log.go:172] (0xc001f92a00) (3) Data frame handling
I0722 10:54:38.037412       7 log.go:172] (0xc001f92a00) (3) Data frame sent
I0722 10:54:38.037865       7 log.go:172] (0xc0010942c0) Data frame received for 3
I0722 10:54:38.037891       7 log.go:172] (0xc001f92a00) (3) Data frame handling
I0722 10:54:38.038105       7 log.go:172] (0xc0010942c0) Data frame received for 5
I0722 10:54:38.038129       7 log.go:172] (0xc0023030e0) (5) Data frame handling
I0722 10:54:38.039673       7 log.go:172] (0xc0010942c0) Data frame received for 1
I0722 10:54:38.039688       7 log.go:172] (0xc002303040) (1) Data frame handling
I0722 10:54:38.039697       7 log.go:172] (0xc002303040) (1) Data frame sent
I0722 10:54:38.039708       7 log.go:172] (0xc0010942c0) (0xc002303040) Stream removed, broadcasting: 1
I0722 10:54:38.039763       7 log.go:172] (0xc0010942c0) Go away received
I0722 10:54:38.039794       7 log.go:172] (0xc0010942c0) (0xc002303040) Stream removed, broadcasting: 1
I0722 10:54:38.039827       7 log.go:172] (0xc0010942c0) (0xc001f92a00) Stream removed, broadcasting: 3
I0722 10:54:38.039837       7 log.go:172] (0xc0010942c0) (0xc0023030e0) Stream removed, broadcasting: 5
Jul 22 10:54:38.039: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:54:38.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-2ldk4" for this suite.
Jul 22 10:55:02.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:55:02.556: INFO: namespace: e2e-tests-pod-network-test-2ldk4, resource: bindings, ignored listing per whitelist
Jul 22 10:55:02.777: INFO: namespace e2e-tests-pod-network-test-2ldk4 deletion completed in 24.581242766s

• [SLOW TEST:53.586 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
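The intra-pod check above boils down to the curl commands in the ExecWithOptions lines: an HTTP GET against the host test container's /dial endpoint, which in turn dials the target pod and reports what it saw. The Go sketch below reproduces that probe using the pod IPs and port from this run; it only prints the raw response body rather than assuming a particular JSON shape, and those addresses are reachable only from inside that cluster's pod network.

```go
// Minimal sketch (illustrative): the /dial probe performed above, issued from Go
// instead of via kubectl exec + curl.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

// dial asks the test container at proxyPod to contact targetPod:targetPort over HTTP,
// mirroring: http://<proxyPod>:8080/dial?request=hostName&protocol=http&host=<target>&port=<port>&tries=1
func dial(proxyPod, targetPod string, targetPort int) (string, error) {
	q := url.Values{}
	q.Set("request", "hostName")
	q.Set("protocol", "http")
	q.Set("host", targetPod)
	q.Set("port", fmt.Sprintf("%d", targetPort))
	q.Set("tries", "1")
	u := fmt.Sprintf("http://%s:8080/dial?%s", proxyPod, q.Encode())

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(u)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	// Pod IPs taken from the log lines above; they differ on every run.
	out, err := dial("10.244.2.20", "10.244.2.19", 8080)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	fmt.Println(out)
}
```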
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:55:02.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:55:10.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-cf4pp" for this suite.
Jul 22 10:55:50.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:55:50.401: INFO: namespace: e2e-tests-kubelet-test-cf4pp, resource: bindings, ignored listing per whitelist
Jul 22 10:55:50.403: INFO: namespace e2e-tests-kubelet-test-cf4pp deletion completed in 40.154864132s

• [SLOW TEST:47.625 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:55:50.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:55:50.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-dzz8z" for this suite.
Jul 22 10:55:56.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:55:56.654: INFO: namespace: e2e-tests-services-dzz8z, resource: bindings, ignored listing per whitelist
Jul 22 10:55:56.707: INFO: namespace e2e-tests-services-dzz8z deletion completed in 6.163458989s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:6.304 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:55:56.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 22 10:55:56.814: INFO: Waiting up to 5m0s for pod "pod-eb9ac790-cc09-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-gd8jg" to be "success or failure"
Jul 22 10:55:56.822: INFO: Pod "pod-eb9ac790-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.287653ms
Jul 22 10:55:58.835: INFO: Pod "pod-eb9ac790-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020456723s
Jul 22 10:56:00.838: INFO: Pod "pod-eb9ac790-cc09-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023877061s
STEP: Saw pod success
Jul 22 10:56:00.838: INFO: Pod "pod-eb9ac790-cc09-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 10:56:00.841: INFO: Trying to get logs from node hunter-worker2 pod pod-eb9ac790-cc09-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 10:56:00.911: INFO: Waiting for pod pod-eb9ac790-cc09-11ea-aa05-0242ac11000b to disappear
Jul 22 10:56:00.939: INFO: Pod pod-eb9ac790-cc09-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:56:00.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-gd8jg" for this suite.
Jul 22 10:56:07.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:56:07.228: INFO: namespace: e2e-tests-emptydir-gd8jg, resource: bindings, ignored listing per whitelist
Jul 22 10:56:07.275: INFO: namespace e2e-tests-emptydir-gd8jg deletion completed in 6.331974187s

• [SLOW TEST:10.567 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
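For the tmpfs case above, the relevant detail is the emptyDir medium. A minimal sketch of a pod using an emptyDir backed by memory (tmpfs) follows; the pod name, image and the stat command are illustrative assumptions rather than the test's actual spec.

```go
// Minimal sketch (illustrative only): an emptyDir volume on tmpfs (Medium: Memory).
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-example"}, // assumed name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumMemory backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29", // assumed image
				// Print the mount's permission bits, roughly what the mode check inspects.
				Command:      []string{"sh", "-c", "stat -c '%a' /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```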
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:56:07.275: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 22 10:56:07.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-hsg98'
Jul 22 10:56:07.745: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 22 10:56:07.745: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
Jul 22 10:56:11.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-hsg98'
Jul 22 10:56:11.979: INFO: stderr: ""
Jul 22 10:56:11.979: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:56:11.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hsg98" for this suite.
Jul 22 10:56:18.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:56:18.157: INFO: namespace: e2e-tests-kubectl-hsg98, resource: bindings, ignored listing per whitelist
Jul 22 10:56:18.213: INFO: namespace e2e-tests-kubectl-hsg98 deletion completed in 6.197057127s

• [SLOW TEST:10.938 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
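The kubectl output above flags `--generator=deployment/v1beta1` as deprecated. A hedged sketch of the deprecated form the test runs and its non-deprecated replacement; the namespace name is illustrative:

  # Deprecated form exercised by the test (prints the warning seen above):
  kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine \
    --generator=deployment/v1beta1 -n my-namespace

  # Replacement suggested by the warning: create the Deployment explicitly.
  kubectl create deployment e2e-test-nginx-deployment \
    --image=docker.io/library/nginx:1.14-alpine -n my-namespace
  kubectl get deployment e2e-test-nginx-deployment -n my-namespace
  kubectl delete deployment e2e-test-nginx-deployment -n my-namespace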
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:56:18.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 10:56:19.045: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8d8d05a-cc09-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-sfdnb" to be "success or failure"
Jul 22 10:56:19.207: INFO: Pod "downwardapi-volume-f8d8d05a-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 161.908871ms
Jul 22 10:56:21.232: INFO: Pod "downwardapi-volume-f8d8d05a-cc09-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186733245s
Jul 22 10:56:23.242: INFO: Pod "downwardapi-volume-f8d8d05a-cc09-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.196330636s
STEP: Saw pod success
Jul 22 10:56:23.242: INFO: Pod "downwardapi-volume-f8d8d05a-cc09-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 10:56:23.245: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-f8d8d05a-cc09-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 10:56:23.264: INFO: Waiting for pod downwardapi-volume-f8d8d05a-cc09-11ea-aa05-0242ac11000b to disappear
Jul 22 10:56:23.268: INFO: Pod downwardapi-volume-f8d8d05a-cc09-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:56:23.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-sfdnb" for this suite.
Jul 22 10:56:29.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:56:29.437: INFO: namespace: e2e-tests-downward-api-sfdnb, resource: bindings, ignored listing per whitelist
Jul 22 10:56:29.437: INFO: namespace e2e-tests-downward-api-sfdnb deletion completed in 6.165298179s

• [SLOW TEST:11.224 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
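The Downward API test above mounts pod metadata as a file with an explicit per-item mode. A minimal sketch of such a volume, assuming hypothetical names and fields (the suite's exact spec differs):

  # Hypothetical pod exposing metadata.labels through a downward API volume item
  # with mode 0400; names are illustrative.
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-mode-demo
    labels:
      app: demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/labels"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
          mode: 0400        # per-item file mode, what "set mode on item file" checks
  EOF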
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:56:29.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the initial replication controller
Jul 22 10:56:29.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-lrsdb'
Jul 22 10:56:30.350: INFO: stderr: ""
Jul 22 10:56:30.350: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 22 10:56:30.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lrsdb'
Jul 22 10:56:30.473: INFO: stderr: ""
Jul 22 10:56:30.473: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Jul 22 10:56:35.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lrsdb'
Jul 22 10:56:35.581: INFO: stderr: ""
Jul 22 10:56:35.581: INFO: stdout: "update-demo-nautilus-8bggx update-demo-nautilus-h2724 "
Jul 22 10:56:35.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8bggx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lrsdb'
Jul 22 10:56:35.698: INFO: stderr: ""
Jul 22 10:56:35.698: INFO: stdout: "true"
Jul 22 10:56:35.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8bggx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lrsdb'
Jul 22 10:56:35.791: INFO: stderr: ""
Jul 22 10:56:35.791: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 22 10:56:35.791: INFO: validating pod update-demo-nautilus-8bggx
Jul 22 10:56:35.795: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 22 10:56:35.795: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 22 10:56:35.795: INFO: update-demo-nautilus-8bggx is verified up and running
Jul 22 10:56:35.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h2724 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lrsdb'
Jul 22 10:56:35.895: INFO: stderr: ""
Jul 22 10:56:35.895: INFO: stdout: "true"
Jul 22 10:56:35.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h2724 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lrsdb'
Jul 22 10:56:35.989: INFO: stderr: ""
Jul 22 10:56:35.989: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 22 10:56:35.989: INFO: validating pod update-demo-nautilus-h2724
Jul 22 10:56:35.993: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 22 10:56:35.993: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 22 10:56:35.993: INFO: update-demo-nautilus-h2724 is verified up and running
STEP: rolling-update to new replication controller
Jul 22 10:56:35.995: INFO: scanned /root for discovery docs: 
Jul 22 10:56:35.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-lrsdb'
Jul 22 10:56:59.259: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul 22 10:56:59.259: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 22 10:56:59.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-lrsdb'
Jul 22 10:56:59.358: INFO: stderr: ""
Jul 22 10:56:59.358: INFO: stdout: "update-demo-kitten-sx2vx update-demo-kitten-tzcj7 "
Jul 22 10:56:59.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sx2vx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lrsdb'
Jul 22 10:56:59.448: INFO: stderr: ""
Jul 22 10:56:59.448: INFO: stdout: "true"
Jul 22 10:56:59.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-sx2vx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lrsdb'
Jul 22 10:56:59.544: INFO: stderr: ""
Jul 22 10:56:59.544: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 22 10:56:59.544: INFO: validating pod update-demo-kitten-sx2vx
Jul 22 10:56:59.557: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 22 10:56:59.557: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 22 10:56:59.557: INFO: update-demo-kitten-sx2vx is verified up and running
Jul 22 10:56:59.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tzcj7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lrsdb'
Jul 22 10:56:59.661: INFO: stderr: ""
Jul 22 10:56:59.661: INFO: stdout: "true"
Jul 22 10:56:59.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tzcj7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-lrsdb'
Jul 22 10:56:59.764: INFO: stderr: ""
Jul 22 10:56:59.764: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul 22 10:56:59.764: INFO: validating pod update-demo-kitten-tzcj7
Jul 22 10:56:59.768: INFO: got data: {
  "image": "kitten.jpg"
}

Jul 22 10:56:59.768: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul 22 10:56:59.768: INFO: update-demo-kitten-tzcj7 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:56:59.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-lrsdb" for this suite.
Jul 22 10:57:23.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:57:23.818: INFO: namespace: e2e-tests-kubectl-lrsdb, resource: bindings, ignored listing per whitelist
Jul 22 10:57:23.863: INFO: namespace e2e-tests-kubectl-lrsdb deletion completed in 24.091444727s

• [SLOW TEST:54.426 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
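The `kubectl rolling-update` command above is itself reported as deprecated in favor of `rollout`. A sketch of how the same nautilus-to-kitten image swap would look on a Deployment rather than an RC (the Deployment name is an assumption; the suite really drives a replication controller), plus a simplified version of the per-pod go-template probe seen in the log:

  # Equivalent rollout on a Deployment (illustrative; not the suite's RC-based flow):
  kubectl set image deployment/update-demo \
    update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
  kubectl rollout status deployment/update-demo

  # Simplified form of the "is the container running" template check from the log:
  kubectl get pod update-demo-nautilus-8bggx -o go-template \
    --template='{{range .status.containerStatuses}}{{if .state.running}}true{{end}}{{end}}'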
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:57:23.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jul 22 10:57:24.031: INFO: namespace e2e-tests-kubectl-k29hk
Jul 22 10:57:24.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-k29hk'
Jul 22 10:57:24.271: INFO: stderr: ""
Jul 22 10:57:24.271: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul 22 10:57:25.275: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 10:57:25.275: INFO: Found 0 / 1
Jul 22 10:57:26.275: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 10:57:26.275: INFO: Found 0 / 1
Jul 22 10:57:27.276: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 10:57:27.276: INFO: Found 0 / 1
Jul 22 10:57:28.276: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 10:57:28.276: INFO: Found 1 / 1
Jul 22 10:57:28.276: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul 22 10:57:28.279: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 10:57:28.279: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 22 10:57:28.279: INFO: wait on redis-master startup in e2e-tests-kubectl-k29hk 
Jul 22 10:57:28.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rl6fh redis-master --namespace=e2e-tests-kubectl-k29hk'
Jul 22 10:57:28.401: INFO: stderr: ""
Jul 22 10:57:28.401: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 22 Jul 10:57:27.029 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jul 10:57:27.029 # Server started, Redis version 3.2.12\n1:M 22 Jul 10:57:27.029 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jul 10:57:27.029 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jul 22 10:57:28.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-k29hk'
Jul 22 10:57:28.540: INFO: stderr: ""
Jul 22 10:57:28.541: INFO: stdout: "service/rm2 exposed\n"
Jul 22 10:57:28.546: INFO: Service rm2 in namespace e2e-tests-kubectl-k29hk found.
STEP: exposing service
Jul 22 10:57:30.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-k29hk'
Jul 22 10:57:30.691: INFO: stderr: ""
Jul 22 10:57:30.691: INFO: stdout: "service/rm3 exposed\n"
Jul 22 10:57:30.724: INFO: Service rm3 in namespace e2e-tests-kubectl-k29hk found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:57:32.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-k29hk" for this suite.
Jul 22 10:58:02.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 10:58:02.817: INFO: namespace: e2e-tests-kubectl-k29hk, resource: bindings, ignored listing per whitelist
Jul 22 10:58:02.876: INFO: namespace e2e-tests-kubectl-k29hk deletion completed in 30.141482282s

• [SLOW TEST:39.012 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
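The expose steps above chain two services off one RC. The same commands, generalized (ports and names taken from the log, the namespace is illustrative):

  kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 -n my-namespace
  kubectl expose service rm2    --name=rm3 --port=2345 --target-port=6379 -n my-namespace
  kubectl get svc rm2 rm3 -n my-namespace -o wide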
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 10:58:02.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-36d1c973-cc0a-11ea-aa05-0242ac11000b
STEP: Creating secret with name s-test-opt-upd-36d1c9dc-cc0a-11ea-aa05-0242ac11000b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-36d1c973-cc0a-11ea-aa05-0242ac11000b
STEP: Updating secret s-test-opt-upd-36d1c9dc-cc0a-11ea-aa05-0242ac11000b
STEP: Creating secret with name s-test-opt-create-36d1ca01-cc0a-11ea-aa05-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 10:59:42.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-4m2p5" for this suite.
Jul 22 11:00:05.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:00:05.140: INFO: namespace: e2e-tests-secrets-4m2p5, resource: bindings, ignored listing per whitelist
Jul 22 11:00:05.199: INFO: namespace e2e-tests-secrets-4m2p5 deletion completed in 22.237835037s

• [SLOW TEST:122.323 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
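The Secrets test above deletes, updates, and creates secrets while a pod has them mounted and waits for the volume to reflect the changes. That only works because the volume marks the secret optional; a minimal sketch with illustrative names:

  # Hypothetical pod mounting an *optional* secret, so the mount tolerates the
  # secret being deleted and re-created later, as the test exercises.
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: optional-secret-demo
  spec:
    containers:
    - name: watcher
      image: busybox
      command: ["sh", "-c", "while true; do ls /etc/secret-volume; sleep 5; done"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: s-test-opt-create
        optional: true     # pod starts even if the secret does not exist yet
  EOF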
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:00:05.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 11:00:05.295: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7fbac0a9-cc0a-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-bfm59" to be "success or failure"
Jul 22 11:00:05.374: INFO: Pod "downwardapi-volume-7fbac0a9-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 79.028848ms
Jul 22 11:00:07.541: INFO: Pod "downwardapi-volume-7fbac0a9-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24628443s
Jul 22 11:00:09.961: INFO: Pod "downwardapi-volume-7fbac0a9-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.665875937s
Jul 22 11:00:11.964: INFO: Pod "downwardapi-volume-7fbac0a9-cc0a-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.668834724s
STEP: Saw pod success
Jul 22 11:00:11.964: INFO: Pod "downwardapi-volume-7fbac0a9-cc0a-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:00:11.966: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-7fbac0a9-cc0a-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 11:00:12.002: INFO: Waiting for pod downwardapi-volume-7fbac0a9-cc0a-11ea-aa05-0242ac11000b to disappear
Jul 22 11:00:12.032: INFO: Pod downwardapi-volume-7fbac0a9-cc0a-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:00:12.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-bfm59" for this suite.
Jul 22 11:00:22.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:00:22.161: INFO: namespace: e2e-tests-projected-bfm59, resource: bindings, ignored listing per whitelist
Jul 22 11:00:22.179: INFO: namespace e2e-tests-projected-bfm59 deletion completed in 10.142887702s

• [SLOW TEST:16.980 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
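The projected downward API test above relies on the documented fallback: when a container sets no memory limit, `limits.memory` exposed through the downward API resolves to the node's allocatable memory. A minimal sketch, with illustrative names:

  # Hypothetical pod: no memory limit is set, so the projected downward API item
  # reports node allocatable memory instead.
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-memlimit-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
  EOF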
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:00:22.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-8a560df8-cc0a-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 11:00:23.177: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8a5b8f7c-cc0a-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-v7hj9" to be "success or failure"
Jul 22 11:00:23.255: INFO: Pod "pod-projected-configmaps-8a5b8f7c-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 77.771063ms
Jul 22 11:00:25.259: INFO: Pod "pod-projected-configmaps-8a5b8f7c-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082102483s
Jul 22 11:00:27.262: INFO: Pod "pod-projected-configmaps-8a5b8f7c-cc0a-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085359066s
STEP: Saw pod success
Jul 22 11:00:27.262: INFO: Pod "pod-projected-configmaps-8a5b8f7c-cc0a-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:00:27.265: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-8a5b8f7c-cc0a-11ea-aa05-0242ac11000b container projected-configmap-volume-test: 
STEP: delete the pod
Jul 22 11:00:27.658: INFO: Waiting for pod pod-projected-configmaps-8a5b8f7c-cc0a-11ea-aa05-0242ac11000b to disappear
Jul 22 11:00:27.829: INFO: Pod pod-projected-configmaps-8a5b8f7c-cc0a-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:00:27.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v7hj9" for this suite.
Jul 22 11:00:33.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:00:33.932: INFO: namespace: e2e-tests-projected-v7hj9, resource: bindings, ignored listing per whitelist
Jul 22 11:00:33.949: INFO: namespace e2e-tests-projected-v7hj9 deletion completed in 6.114570774s

• [SLOW TEST:11.771 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
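The projected configMap test above consumes a configMap through a projected volume. A minimal sketch of that pattern, assuming hypothetical configMap, pod, and key names:

  # Hypothetical configMap plus a pod consuming it via a projected volume.
  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-configmap-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["cat", "/etc/projected/data-1"]
      volumeMounts:
      - name: config
        mountPath: /etc/projected
    volumes:
    - name: config
      projected:
        sources:
        - configMap:
            name: demo-config
  EOF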
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:00:33.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 11:00:34.090: INFO: (0) /api/v1/nodes/hunter-worker:10250/proxy/logs/: 
alternatives.log
containers/
[identical alternatives.log / containers/ directory listing repeated for the remaining proxy attempts; the rest of the Proxy test output and its summary were lost when the HTML in the log was stripped]
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052
STEP: creating the pod
Jul 22 11:00:40.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-t8fz6'
Jul 22 11:00:43.025: INFO: stderr: ""
Jul 22 11:00:43.025: INFO: stdout: "pod/pause created\n"
Jul 22 11:00:43.025: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jul 22 11:00:43.025: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-t8fz6" to be "running and ready"
Jul 22 11:00:43.055: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 30.006551ms
Jul 22 11:00:45.207: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.182141303s
Jul 22 11:00:47.211: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.186029568s
Jul 22 11:00:47.211: INFO: Pod "pause" satisfied condition "running and ready"
Jul 22 11:00:47.211: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: adding the label testing-label with value testing-label-value to a pod
Jul 22 11:00:47.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-t8fz6'
Jul 22 11:00:47.346: INFO: stderr: ""
Jul 22 11:00:47.346: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jul 22 11:00:47.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-t8fz6'
Jul 22 11:00:47.506: INFO: stderr: ""
Jul 22 11:00:47.506: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jul 22 11:00:47.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=e2e-tests-kubectl-t8fz6'
Jul 22 11:00:47.617: INFO: stderr: ""
Jul 22 11:00:47.617: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jul 22 11:00:47.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=e2e-tests-kubectl-t8fz6'
Jul 22 11:00:47.731: INFO: stderr: ""
Jul 22 11:00:47.731: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059
STEP: using delete to clean up resources
Jul 22 11:00:47.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-t8fz6'
Jul 22 11:00:47.885: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 22 11:00:47.885: INFO: stdout: "pod \"pause\" force deleted\n"
Jul 22 11:00:47.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-t8fz6'
Jul 22 11:00:48.124: INFO: stderr: "No resources found.\n"
Jul 22 11:00:48.124: INFO: stdout: ""
Jul 22 11:00:48.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=e2e-tests-kubectl-t8fz6 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 22 11:00:48.220: INFO: stderr: ""
Jul 22 11:00:48.220: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:00:48.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-t8fz6" for this suite.
Jul 22 11:00:54.250: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:00:54.259: INFO: namespace: e2e-tests-kubectl-t8fz6, resource: bindings, ignored listing per whitelist
Jul 22 11:00:54.327: INFO: namespace e2e-tests-kubectl-t8fz6 deletion completed in 6.103568297s

• [SLOW TEST:14.033 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
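The label test above adds, verifies, and removes a label on the pause pod. The same add/verify/remove cycle, generalized (the namespace is illustrative):

  kubectl label pods pause testing-label=testing-label-value -n my-namespace
  kubectl get pod pause -L testing-label -n my-namespace
  kubectl label pods pause testing-label- -n my-namespace   # trailing '-' removes the label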
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:00:54.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 11:00:54.451: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d06b71f-cc0a-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-wh777" to be "success or failure"
Jul 22 11:00:54.462: INFO: Pod "downwardapi-volume-9d06b71f-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.161343ms
Jul 22 11:00:56.466: INFO: Pod "downwardapi-volume-9d06b71f-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01424498s
Jul 22 11:00:58.469: INFO: Pod "downwardapi-volume-9d06b71f-cc0a-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017414558s
STEP: Saw pod success
Jul 22 11:00:58.469: INFO: Pod "downwardapi-volume-9d06b71f-cc0a-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:00:58.471: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-9d06b71f-cc0a-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 11:00:58.513: INFO: Waiting for pod downwardapi-volume-9d06b71f-cc0a-11ea-aa05-0242ac11000b to disappear
Jul 22 11:00:58.626: INFO: Pod downwardapi-volume-9d06b71f-cc0a-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:00:58.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-wh777" for this suite.
Jul 22 11:01:04.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:01:04.668: INFO: namespace: e2e-tests-downward-api-wh777, resource: bindings, ignored listing per whitelist
Jul 22 11:01:04.721: INFO: namespace e2e-tests-downward-api-wh777 deletion completed in 6.091694222s

• [SLOW TEST:10.394 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:01:04.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-a3358d85-cc0a-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 11:01:04.877: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a33772f4-cc0a-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-8mkl7" to be "success or failure"
Jul 22 11:01:04.880: INFO: Pod "pod-projected-configmaps-a33772f4-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.880682ms
Jul 22 11:01:06.885: INFO: Pod "pod-projected-configmaps-a33772f4-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007341342s
Jul 22 11:01:08.888: INFO: Pod "pod-projected-configmaps-a33772f4-cc0a-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010910422s
STEP: Saw pod success
Jul 22 11:01:08.888: INFO: Pod "pod-projected-configmaps-a33772f4-cc0a-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:01:08.891: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-a33772f4-cc0a-11ea-aa05-0242ac11000b container projected-configmap-volume-test: 
STEP: delete the pod
Jul 22 11:01:09.116: INFO: Waiting for pod pod-projected-configmaps-a33772f4-cc0a-11ea-aa05-0242ac11000b to disappear
Jul 22 11:01:09.225: INFO: Pod pod-projected-configmaps-a33772f4-cc0a-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:01:09.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8mkl7" for this suite.
Jul 22 11:01:15.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:01:15.428: INFO: namespace: e2e-tests-projected-8mkl7, resource: bindings, ignored listing per whitelist
Jul 22 11:01:15.432: INFO: namespace e2e-tests-projected-8mkl7 deletion completed in 6.204276543s

• [SLOW TEST:10.711 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:01:15.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-a9963fd6-cc0a-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 11:01:15.548: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a99a0268-cc0a-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-gwln5" to be "success or failure"
Jul 22 11:01:15.552: INFO: Pod "pod-projected-secrets-a99a0268-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.888986ms
Jul 22 11:01:17.557: INFO: Pod "pod-projected-secrets-a99a0268-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009484186s
Jul 22 11:01:19.561: INFO: Pod "pod-projected-secrets-a99a0268-cc0a-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013524127s
STEP: Saw pod success
Jul 22 11:01:19.561: INFO: Pod "pod-projected-secrets-a99a0268-cc0a-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:01:19.564: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-a99a0268-cc0a-11ea-aa05-0242ac11000b container projected-secret-volume-test: 
STEP: delete the pod
Jul 22 11:01:19.583: INFO: Waiting for pod pod-projected-secrets-a99a0268-cc0a-11ea-aa05-0242ac11000b to disappear
Jul 22 11:01:19.599: INFO: Pod pod-projected-secrets-a99a0268-cc0a-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:01:19.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gwln5" for this suite.
Jul 22 11:01:25.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:01:25.665: INFO: namespace: e2e-tests-projected-gwln5, resource: bindings, ignored listing per whitelist
Jul 22 11:01:25.692: INFO: namespace e2e-tests-projected-gwln5 deletion completed in 6.08983153s

• [SLOW TEST:10.259 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
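"Mappings and Item Mode set" in the projected secret test above refers to remapping a secret key to a new path with an explicit file mode. A minimal sketch, assuming hypothetical secret, pod, and key names:

  # Hypothetical projected secret with a key mapped to a new path and mode 0400.
  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/projected-secret && cat /etc/projected-secret/new-path-data-1"]
      volumeMounts:
      - name: secret
        mountPath: /etc/projected-secret
    volumes:
    - name: secret
      projected:
        sources:
        - secret:
            name: demo-secret
            items:
            - key: data-1
              path: new-path-data-1    # mapping: key data-1 appears under a new file name
              mode: 0400               # explicit item mode
  EOF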
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:01:25.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jul 22 11:01:40.907: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:01:41.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replicaset-x5ks9" for this suite.
Jul 22 11:02:04.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:02:04.131: INFO: namespace: e2e-tests-replicaset-x5ks9, resource: bindings, ignored listing per whitelist
Jul 22 11:02:04.133: INFO: namespace e2e-tests-replicaset-x5ks9 deletion completed in 22.177051111s

• [SLOW TEST:38.441 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
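The adopt/release behaviour in the ReplicaSet test above hinges on label selectors: a bare pod matching the selector is adopted, and relabeling it so it no longer matches makes the controller release it (and create a replacement). Illustrative commands, not the API calls the suite makes:

  # Relabel the pod so it stops matching the ReplicaSet selector, then inspect it.
  kubectl label pod pod-adoption-release --overwrite name=not-matching
  kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'   # empty once released
  kubectl get pods -l name=pod-adoption-release                                    # replacement pod created by the RS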
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:02:04.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 22 11:02:04.298: INFO: Waiting up to 5m0s for pod "pod-c6a75c01-cc0a-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-82lj4" to be "success or failure"
Jul 22 11:02:04.314: INFO: Pod "pod-c6a75c01-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.31095ms
Jul 22 11:02:06.710: INFO: Pod "pod-c6a75c01-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.411794411s
Jul 22 11:02:08.714: INFO: Pod "pod-c6a75c01-cc0a-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.415879714s
STEP: Saw pod success
Jul 22 11:02:08.714: INFO: Pod "pod-c6a75c01-cc0a-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:02:08.717: INFO: Trying to get logs from node hunter-worker2 pod pod-c6a75c01-cc0a-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 11:02:08.938: INFO: Waiting for pod pod-c6a75c01-cc0a-11ea-aa05-0242ac11000b to disappear
Jul 22 11:02:08.949: INFO: Pod pod-c6a75c01-cc0a-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:02:08.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-82lj4" for this suite.
Jul 22 11:02:14.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:02:15.063: INFO: namespace: e2e-tests-emptydir-82lj4, resource: bindings, ignored listing per whitelist
Jul 22 11:02:15.096: INFO: namespace e2e-tests-emptydir-82lj4 deletion completed in 6.144050544s

• [SLOW TEST:10.963 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:02:15.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 22 11:02:23.286: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 22 11:02:23.307: INFO: Pod pod-with-prestop-http-hook still exists
Jul 22 11:02:25.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 22 11:02:25.333: INFO: Pod pod-with-prestop-http-hook still exists
Jul 22 11:02:27.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 22 11:02:27.314: INFO: Pod pod-with-prestop-http-hook still exists
Jul 22 11:02:29.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 22 11:02:29.312: INFO: Pod pod-with-prestop-http-hook still exists
Jul 22 11:02:31.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 22 11:02:31.312: INFO: Pod pod-with-prestop-http-hook still exists
Jul 22 11:02:33.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 22 11:02:33.370: INFO: Pod pod-with-prestop-http-hook still exists
Jul 22 11:02:35.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 22 11:02:35.312: INFO: Pod pod-with-prestop-http-hook still exists
Jul 22 11:02:37.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 22 11:02:37.328: INFO: Pod pod-with-prestop-http-hook still exists
Jul 22 11:02:39.308: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul 22 11:02:39.312: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:02:39.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-pkj6l" for this suite.
Jul 22 11:03:01.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:03:01.396: INFO: namespace: e2e-tests-container-lifecycle-hook-pkj6l, resource: bindings, ignored listing per whitelist
Jul 22 11:03:01.441: INFO: namespace e2e-tests-container-lifecycle-hook-pkj6l deletion completed in 22.117146134s

• [SLOW TEST:46.345 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
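The lifecycle-hook test above deletes a pod carrying a preStop httpGet hook and then confirms the hook-handler pod received the request. A minimal sketch of such a pod; the handler host/port here are assumptions, since the suite wires the hook to its own helper pod:

  # Hypothetical pod with a preStop httpGet hook; host/port are illustrative.
  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-http-hook
  spec:
    containers:
    - name: pod-with-prestop-http-hook
      image: k8s.gcr.io/pause:3.1
      lifecycle:
        preStop:
          httpGet:
            path: /echo?msg=prestop
            port: 8080
            host: 10.244.1.10   # IP of the hook-handler pod; assumed for illustration
  EOF
  kubectl delete pod pod-with-prestop-http-hook   # deletion triggers the preStop hook before termination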
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:03:01.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override all
Jul 22 11:03:01.570: INFO: Waiting up to 5m0s for pod "client-containers-e8cac56c-cc0a-11ea-aa05-0242ac11000b" in namespace "e2e-tests-containers-79jvg" to be "success or failure"
Jul 22 11:03:01.575: INFO: Pod "client-containers-e8cac56c-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.955956ms
Jul 22 11:03:03.634: INFO: Pod "client-containers-e8cac56c-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063824396s
Jul 22 11:03:05.652: INFO: Pod "client-containers-e8cac56c-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081901991s
Jul 22 11:03:07.655: INFO: Pod "client-containers-e8cac56c-cc0a-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.085171877s
STEP: Saw pod success
Jul 22 11:03:07.655: INFO: Pod "client-containers-e8cac56c-cc0a-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:03:07.657: INFO: Trying to get logs from node hunter-worker pod client-containers-e8cac56c-cc0a-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 11:03:07.682: INFO: Waiting for pod client-containers-e8cac56c-cc0a-11ea-aa05-0242ac11000b to disappear
Jul 22 11:03:07.686: INFO: Pod client-containers-e8cac56c-cc0a-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:03:07.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-79jvg" for this suite.
Jul 22 11:03:13.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:03:13.761: INFO: namespace: e2e-tests-containers-79jvg, resource: bindings, ignored listing per whitelist
Jul 22 11:03:13.794: INFO: namespace e2e-tests-containers-79jvg deletion completed in 6.104227265s

• [SLOW TEST:12.353 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
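The "override all" pod created above pins both command and args; a minimal sketch of such a spec (image and values are illustrative, not the ones the suite used):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]           # replaces the image's ENTRYPOINT
    args: ["override", "arguments"]  # replaces the image's CMD
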
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:03:13.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 11:03:13.923: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f027bcf8-cc0a-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-q79z4" to be "success or failure"
Jul 22 11:03:13.945: INFO: Pod "downwardapi-volume-f027bcf8-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.908102ms
Jul 22 11:03:15.949: INFO: Pod "downwardapi-volume-f027bcf8-cc0a-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025994127s
Jul 22 11:03:17.953: INFO: Pod "downwardapi-volume-f027bcf8-cc0a-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029764415s
STEP: Saw pod success
Jul 22 11:03:17.953: INFO: Pod "downwardapi-volume-f027bcf8-cc0a-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:03:17.956: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-f027bcf8-cc0a-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 11:03:17.976: INFO: Waiting for pod downwardapi-volume-f027bcf8-cc0a-11ea-aa05-0242ac11000b to disappear
Jul 22 11:03:18.035: INFO: Pod downwardapi-volume-f027bcf8-cc0a-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:03:18.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-q79z4" for this suite.
Jul 22 11:03:24.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:03:24.093: INFO: namespace: e2e-tests-downward-api-q79z4, resource: bindings, ignored listing per whitelist
Jul 22 11:03:24.128: INFO: namespace e2e-tests-downward-api-q79z4 deletion completed in 6.089097609s

• [SLOW TEST:10.334 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
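The downward API volume test above mounts the container's own cpu request as a file; a minimal sketch, with illustrative names and an assumed 250m request:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                # file contains the request in millicores
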
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:03:24.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 11:03:24.285: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jul 22 11:03:24.299: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:24.301: INFO: Number of nodes with available pods: 0
Jul 22 11:03:24.301: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 11:03:25.306: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:25.309: INFO: Number of nodes with available pods: 0
Jul 22 11:03:25.309: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 11:03:26.305: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:26.307: INFO: Number of nodes with available pods: 0
Jul 22 11:03:26.307: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 11:03:27.463: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:27.634: INFO: Number of nodes with available pods: 0
Jul 22 11:03:27.634: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 11:03:28.329: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:28.382: INFO: Number of nodes with available pods: 1
Jul 22 11:03:28.382: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 11:03:29.307: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:29.311: INFO: Number of nodes with available pods: 2
Jul 22 11:03:29.311: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jul 22 11:03:29.405: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:29.405: INFO: Wrong image for pod: daemon-set-k9b4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:29.413: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:30.418: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:30.418: INFO: Wrong image for pod: daemon-set-k9b4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:30.422: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:31.418: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:31.418: INFO: Wrong image for pod: daemon-set-k9b4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:31.422: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:32.418: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:32.418: INFO: Wrong image for pod: daemon-set-k9b4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:32.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:33.473: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:33.473: INFO: Wrong image for pod: daemon-set-k9b4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:33.473: INFO: Pod daemon-set-k9b4q is not available
Jul 22 11:03:33.478: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:34.418: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:34.418: INFO: Wrong image for pod: daemon-set-k9b4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:34.418: INFO: Pod daemon-set-k9b4q is not available
Jul 22 11:03:34.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:35.418: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:35.418: INFO: Wrong image for pod: daemon-set-k9b4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:35.418: INFO: Pod daemon-set-k9b4q is not available
Jul 22 11:03:35.422: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:36.418: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:36.418: INFO: Wrong image for pod: daemon-set-k9b4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:36.418: INFO: Pod daemon-set-k9b4q is not available
Jul 22 11:03:36.422: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:37.418: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:37.418: INFO: Wrong image for pod: daemon-set-k9b4q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:37.418: INFO: Pod daemon-set-k9b4q is not available
Jul 22 11:03:37.421: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:38.418: INFO: Pod daemon-set-756ms is not available
Jul 22 11:03:38.418: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:38.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:39.418: INFO: Pod daemon-set-756ms is not available
Jul 22 11:03:39.418: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:39.421: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:40.418: INFO: Pod daemon-set-756ms is not available
Jul 22 11:03:40.418: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:40.422: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:41.431: INFO: Pod daemon-set-756ms is not available
Jul 22 11:03:41.431: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:41.434: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:42.418: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:42.421: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:43.417: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:43.422: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:44.417: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:44.417: INFO: Pod daemon-set-8r5wp is not available
Jul 22 11:03:44.420: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:45.418: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:45.418: INFO: Pod daemon-set-8r5wp is not available
Jul 22 11:03:45.422: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:46.419: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:46.419: INFO: Pod daemon-set-8r5wp is not available
Jul 22 11:03:46.423: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:47.416: INFO: Wrong image for pod: daemon-set-8r5wp. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul 22 11:03:47.416: INFO: Pod daemon-set-8r5wp is not available
Jul 22 11:03:47.426: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:48.418: INFO: Pod daemon-set-4x5f9 is not available
Jul 22 11:03:48.422: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jul 22 11:03:48.426: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:48.429: INFO: Number of nodes with available pods: 1
Jul 22 11:03:48.429: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 22 11:03:49.434: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:49.437: INFO: Number of nodes with available pods: 1
Jul 22 11:03:49.437: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 22 11:03:50.433: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:50.436: INFO: Number of nodes with available pods: 1
Jul 22 11:03:50.436: INFO: Node hunter-worker2 is running more than one daemon pod
Jul 22 11:03:51.435: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:03:51.438: INFO: Number of nodes with available pods: 2
Jul 22 11:03:51.438: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-g459p, will wait for the garbage collector to delete the pods
Jul 22 11:03:51.511: INFO: Deleting DaemonSet.extensions daemon-set took: 5.263173ms
Jul 22 11:03:51.611: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.208624ms
Jul 22 11:03:57.514: INFO: Number of nodes with available pods: 0
Jul 22 11:03:57.514: INFO: Number of running nodes: 0, number of available pods: 0
Jul 22 11:03:57.516: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-g459p/daemonsets","resourceVersion":"2173496"},"items":null}

Jul 22 11:03:57.518: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-g459p/pods","resourceVersion":"2173496"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:03:57.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-g459p" for this suite.
Jul 22 11:04:03.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:04:03.577: INFO: namespace: e2e-tests-daemonsets-g459p, resource: bindings, ignored listing per whitelist
Jul 22 11:04:03.627: INFO: namespace e2e-tests-daemonsets-g459p deletion completed in 6.097124237s

• [SLOW TEST:39.498 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
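The rolling update logged above (nginx:1.14-alpine -> redis:1.0 under updateStrategy RollingUpdate) can be reproduced by hand with kubectl; the commands below are an illustrative equivalent of what the test drives through the API, and the container name `app` is an assumption:

kubectl -n e2e-tests-daemonsets-g459p set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
kubectl -n e2e-tests-daemonsets-g459p rollout status daemonset/daemon-set
kubectl -n e2e-tests-daemonsets-g459p get pods -o wide    # one updated pod per schedulable node; the tainted control-plane node stays empty
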
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:04:03.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 11:04:03.769: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0dd85625-cc0b-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-k4pw4" to be "success or failure"
Jul 22 11:04:03.805: INFO: Pod "downwardapi-volume-0dd85625-cc0b-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 35.17533ms
Jul 22 11:04:05.809: INFO: Pod "downwardapi-volume-0dd85625-cc0b-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039606742s
Jul 22 11:04:07.994: INFO: Pod "downwardapi-volume-0dd85625-cc0b-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224632831s
Jul 22 11:04:09.998: INFO: Pod "downwardapi-volume-0dd85625-cc0b-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.228810843s
STEP: Saw pod success
Jul 22 11:04:09.998: INFO: Pod "downwardapi-volume-0dd85625-cc0b-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:04:10.001: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-0dd85625-cc0b-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 11:04:10.073: INFO: Waiting for pod downwardapi-volume-0dd85625-cc0b-11ea-aa05-0242ac11000b to disappear
Jul 22 11:04:10.085: INFO: Pod downwardapi-volume-0dd85625-cc0b-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:04:10.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-k4pw4" for this suite.
Jul 22 11:04:16.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:04:16.215: INFO: namespace: e2e-tests-downward-api-k4pw4, resource: bindings, ignored listing per whitelist
Jul 22 11:04:16.219: INFO: namespace e2e-tests-downward-api-k4pw4 deletion completed in 6.126407649s

• [SLOW TEST:12.592 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
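The cpu-limit case differs from the cpu-request manifest sketched earlier only in the downwardAPI item (and the container must also declare resources.limits.cpu); roughly:

      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
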
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:04:16.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-155bd56a-cc0b-11ea-aa05-0242ac11000b
STEP: Creating configMap with name cm-test-opt-upd-155bd5c3-cc0b-11ea-aa05-0242ac11000b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-155bd56a-cc0b-11ea-aa05-0242ac11000b
STEP: Updating configmap cm-test-opt-upd-155bd5c3-cc0b-11ea-aa05-0242ac11000b
STEP: Creating configMap with name cm-test-opt-create-155bd5e5-cc0b-11ea-aa05-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:05:48.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-pgtrx" for this suite.
Jul 22 11:06:10.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:06:10.804: INFO: namespace: e2e-tests-projected-pgtrx, resource: bindings, ignored listing per whitelist
Jul 22 11:06:10.929: INFO: namespace e2e-tests-projected-pgtrx deletion completed in 22.156036089s

• [SLOW TEST:114.710 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
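The projected-configMap test above relies on every source being marked optional, so the pod keeps serving the volume while configMaps are deleted, updated and created underneath it; a minimal sketch (configMap names shortened from the generated ones, image illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps     # illustrative name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-configmaps
      mountPath: /etc/projected
  volumes:
  - name: projected-configmaps
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del      # deleted while the pod runs
          optional: true
      - configMap:
          name: cm-test-opt-upd      # updated while the pod runs
          optional: true
      - configMap:
          name: cm-test-opt-create   # created only after the pod starts
          optional: true
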
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:06:10.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating Redis RC
Jul 22 11:06:11.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-xdrkq'
Jul 22 11:06:11.314: INFO: stderr: ""
Jul 22 11:06:11.314: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul 22 11:06:12.317: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 11:06:12.317: INFO: Found 0 / 1
Jul 22 11:06:13.361: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 11:06:13.362: INFO: Found 0 / 1
Jul 22 11:06:14.340: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 11:06:14.340: INFO: Found 0 / 1
Jul 22 11:06:15.319: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 11:06:15.319: INFO: Found 1 / 1
Jul 22 11:06:15.319: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jul 22 11:06:15.322: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 11:06:15.322: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 22 11:06:15.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-9d556 --namespace=e2e-tests-kubectl-xdrkq -p {"metadata":{"annotations":{"x":"y"}}}'
Jul 22 11:06:15.462: INFO: stderr: ""
Jul 22 11:06:15.462: INFO: stdout: "pod/redis-master-9d556 patched\n"
STEP: checking annotations
Jul 22 11:06:15.502: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 11:06:15.502: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:06:15.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-xdrkq" for this suite.
Jul 22 11:06:39.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:06:39.579: INFO: namespace: e2e-tests-kubectl-xdrkq, resource: bindings, ignored listing per whitelist
Jul 22 11:06:39.617: INFO: namespace e2e-tests-kubectl-xdrkq deletion completed in 24.11052847s

• [SLOW TEST:28.687 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
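One way to confirm the patch applied above outside the suite (an illustrative check, not something the test runs):

kubectl -n e2e-tests-kubectl-xdrkq get pod redis-master-9d556 -o jsonpath='{.metadata.annotations.x}'    # prints: y
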
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:06:39.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: executing a command with run --rm and attach with stdin
Jul 22 11:06:39.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=e2e-tests-kubectl-h76mm run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jul 22 11:06:43.431: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0722 11:06:43.371515     859 log.go:172] (0xc00014c6e0) (0xc0008ce140) Create stream\nI0722 11:06:43.371582     859 log.go:172] (0xc00014c6e0) (0xc0008ce140) Stream added, broadcasting: 1\nI0722 11:06:43.373853     859 log.go:172] (0xc00014c6e0) Reply frame received for 1\nI0722 11:06:43.373891     859 log.go:172] (0xc00014c6e0) (0xc00091c000) Create stream\nI0722 11:06:43.373904     859 log.go:172] (0xc00014c6e0) (0xc00091c000) Stream added, broadcasting: 3\nI0722 11:06:43.374811     859 log.go:172] (0xc00014c6e0) Reply frame received for 3\nI0722 11:06:43.374841     859 log.go:172] (0xc00014c6e0) (0xc00091c0a0) Create stream\nI0722 11:06:43.374850     859 log.go:172] (0xc00014c6e0) (0xc00091c0a0) Stream added, broadcasting: 5\nI0722 11:06:43.375703     859 log.go:172] (0xc00014c6e0) Reply frame received for 5\nI0722 11:06:43.375733     859 log.go:172] (0xc00014c6e0) (0xc0005934a0) Create stream\nI0722 11:06:43.375742     859 log.go:172] (0xc00014c6e0) (0xc0005934a0) Stream added, broadcasting: 7\nI0722 11:06:43.376539     859 log.go:172] (0xc00014c6e0) Reply frame received for 7\nI0722 11:06:43.376697     859 log.go:172] (0xc00091c000) (3) Writing data frame\nI0722 11:06:43.376917     859 log.go:172] (0xc00091c000) (3) Writing data frame\nI0722 11:06:43.377874     859 log.go:172] (0xc00014c6e0) Data frame received for 5\nI0722 11:06:43.377897     859 log.go:172] (0xc00091c0a0) (5) Data frame handling\nI0722 11:06:43.377913     859 log.go:172] (0xc00091c0a0) (5) Data frame sent\nI0722 11:06:43.378628     859 log.go:172] (0xc00014c6e0) Data frame received for 5\nI0722 11:06:43.378642     859 log.go:172] (0xc00091c0a0) (5) Data frame handling\nI0722 11:06:43.378653     859 log.go:172] (0xc00091c0a0) (5) Data frame sent\nI0722 11:06:43.410584     859 log.go:172] (0xc00014c6e0) Data frame received for 7\nI0722 11:06:43.410623     859 log.go:172] (0xc0005934a0) (7) Data frame handling\nI0722 11:06:43.410653     859 log.go:172] (0xc00014c6e0) Data frame received for 5\nI0722 11:06:43.410666     859 log.go:172] (0xc00091c0a0) (5) Data frame handling\nI0722 11:06:43.410920     859 log.go:172] (0xc00014c6e0) Data frame received for 1\nI0722 11:06:43.410946     859 log.go:172] (0xc0008ce140) (1) Data frame handling\nI0722 11:06:43.410963     859 log.go:172] (0xc0008ce140) (1) Data frame sent\nI0722 11:06:43.410987     859 log.go:172] (0xc00014c6e0) (0xc00091c000) Stream removed, broadcasting: 3\nI0722 11:06:43.411036     859 log.go:172] (0xc00014c6e0) (0xc0008ce140) Stream removed, broadcasting: 1\nI0722 11:06:43.411066     859 log.go:172] (0xc00014c6e0) Go away received\nI0722 11:06:43.411161     859 log.go:172] (0xc00014c6e0) (0xc0008ce140) Stream removed, broadcasting: 1\nI0722 11:06:43.411179     859 log.go:172] (0xc00014c6e0) (0xc00091c000) Stream removed, broadcasting: 3\nI0722 11:06:43.411187     859 log.go:172] (0xc00014c6e0) (0xc00091c0a0) Stream removed, broadcasting: 5\nI0722 11:06:43.411195     859 log.go:172] (0xc00014c6e0) (0xc0005934a0) Stream removed, broadcasting: 7\n"
Jul 22 11:06:43.431: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:06:45.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-h76mm" for this suite.
Jul 22 11:06:51.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:06:51.519: INFO: namespace: e2e-tests-kubectl-h76mm, resource: bindings, ignored listing per whitelist
Jul 22 11:06:51.536: INFO: namespace e2e-tests-kubectl-h76mm deletion completed in 6.094036709s

• [SLOW TEST:11.919 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
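The stderr above notes that --generator=job/v1 is deprecated; with a current kubectl the same one-shot job can be created explicitly. The commands below are an illustrative equivalent, not what the suite runs:

kubectl -n e2e-tests-kubectl-h76mm create job e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 -- sh -c 'cat && echo stdin closed'
kubectl -n e2e-tests-kubectl-h76mm wait --for=condition=complete job/e2e-test-rm-busybox-job
kubectl -n e2e-tests-kubectl-h76mm delete job e2e-test-rm-busybox-job
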
SS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:06:51.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 22 11:06:51.661: INFO: Waiting up to 5m0s for pod "downward-api-71f0fb19-cc0b-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-w5jxb" to be "success or failure"
Jul 22 11:06:51.698: INFO: Pod "downward-api-71f0fb19-cc0b-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.938508ms
Jul 22 11:06:53.701: INFO: Pod "downward-api-71f0fb19-cc0b-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040356269s
Jul 22 11:06:55.705: INFO: Pod "downward-api-71f0fb19-cc0b-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044257766s
STEP: Saw pod success
Jul 22 11:06:55.705: INFO: Pod "downward-api-71f0fb19-cc0b-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:06:55.708: INFO: Trying to get logs from node hunter-worker2 pod downward-api-71f0fb19-cc0b-11ea-aa05-0242ac11000b container dapi-container: 
STEP: delete the pod
Jul 22 11:06:55.745: INFO: Waiting for pod downward-api-71f0fb19-cc0b-11ea-aa05-0242ac11000b to disappear
Jul 22 11:06:55.755: INFO: Pod downward-api-71f0fb19-cc0b-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:06:55.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-w5jxb" for this suite.
Jul 22 11:07:01.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:07:01.811: INFO: namespace: e2e-tests-downward-api-w5jxb, resource: bindings, ignored listing per whitelist
Jul 22 11:07:01.956: INFO: namespace e2e-tests-downward-api-w5jxb deletion completed in 6.197329014s

• [SLOW TEST:10.420 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
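The env-var flavour of the downward API exposes the same resource fields without a volume; a minimal sketch with assumed requests and limits (for env vars, resourceFieldRef defaults to the current container):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}
      limits:   {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_REQUEST
      valueFrom: {resourceFieldRef: {resource: requests.cpu, divisor: 1m}}
    - name: CPU_LIMIT
      valueFrom: {resourceFieldRef: {resource: limits.cpu, divisor: 1m}}
    - name: MEMORY_REQUEST
      valueFrom: {resourceFieldRef: {resource: requests.memory, divisor: 1Mi}}
    - name: MEMORY_LIMIT
      valueFrom: {resourceFieldRef: {resource: limits.memory, divisor: 1Mi}}
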
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:07:01.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:07:06.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-2m9v2" for this suite.
Jul 22 11:07:12.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:07:12.826: INFO: namespace: e2e-tests-kubelet-test-2m9v2, resource: bindings, ignored listing per whitelist
Jul 22 11:07:12.864: INFO: namespace e2e-tests-kubelet-test-2m9v2 deletion completed in 6.344714158s

• [SLOW TEST:10.908 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
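The kubelet test above runs a command that always fails and then asserts on the container's terminated state; an illustrative reproduction (pod name, image and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: bin-false-pod                # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]          # always exits non-zero

kubectl -n e2e-tests-kubelet-test-2m9v2 get pod bin-false-pod -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'    # Error
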
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:07:12.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Creating an uninitialized pod in the namespace
Jul 22 11:07:25.528: INFO: error from create uninitialized namespace: 
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:08:03.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-6hzrq" for this suite.
Jul 22 11:08:09.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:08:09.163: INFO: namespace: e2e-tests-namespaces-6hzrq, resource: bindings, ignored listing per whitelist
Jul 22 11:08:09.214: INFO: namespace e2e-tests-namespaces-6hzrq deletion completed in 6.1257095s
STEP: Destroying namespace "e2e-tests-nsdeletetest-h4lgq" for this suite.
Jul 22 11:08:09.217: INFO: Namespace e2e-tests-nsdeletetest-h4lgq was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-nktgt" for this suite.
Jul 22 11:08:15.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:08:15.269: INFO: namespace: e2e-tests-nsdeletetest-nktgt, resource: bindings, ignored listing per whitelist
Jul 22 11:08:15.386: INFO: namespace e2e-tests-nsdeletetest-nktgt deletion completed in 6.168555625s

• [SLOW TEST:62.521 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
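The namespace test above boils down to: pods in a namespace must be gone once the namespace is deleted. An illustrative by-hand equivalent (the namespace and pod names are made up):

kubectl create namespace nsdeletetest-demo
kubectl -n nsdeletetest-demo run test-pod --image=docker.io/library/nginx:1.14-alpine --restart=Never
kubectl delete namespace nsdeletetest-demo --wait=true
kubectl -n nsdeletetest-demo get pods        # fails: the namespace, and every pod in it, no longer exists
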
S
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:08:15.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jul 22 11:08:19.822: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-a3f4d72b-cc0b-11ea-aa05-0242ac11000b,GenerateName:,Namespace:e2e-tests-events-wk8vf,SelfLink:/api/v1/namespaces/e2e-tests-events-wk8vf/pods/send-events-a3f4d72b-cc0b-11ea-aa05-0242ac11000b,UID:a3feb22f-cc0b-11ea-b2c9-0242ac120008,ResourceVersion:2174526,Generation:0,CreationTimestamp:2020-07-22 11:08:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 566013614,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jrvpl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jrvpl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-jrvpl true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001e156c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001e156e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:08:15 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:08:19 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:08:19 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:08:15 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.19,StartTime:2020-07-22 11:08:15 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-07-22 11:08:18 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://d29f0f1e6d9ac0d849da1218c47ec983773aae332b530bcd717228b6492325ce}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jul 22 11:08:21.827: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jul 22 11:08:23.872: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:08:24.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-events-wk8vf" for this suite.
Jul 22 11:09:10.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:09:10.194: INFO: namespace: e2e-tests-events-wk8vf, resource: bindings, ignored listing per whitelist
Jul 22 11:09:10.257: INFO: namespace e2e-tests-events-wk8vf deletion completed in 46.08671604s

• [SLOW TEST:54.871 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
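The scheduler and kubelet events checked above can also be listed with field selectors; an illustrative equivalent of the lookups the test performs through the API:

kubectl -n e2e-tests-events-wk8vf get events --field-selector involvedObject.name=send-events-a3f4d72b-cc0b-11ea-aa05-0242ac11000b,source=default-scheduler
kubectl -n e2e-tests-events-wk8vf get events --field-selector involvedObject.name=send-events-a3f4d72b-cc0b-11ea-aa05-0242ac11000b,source=kubelet
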
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:09:10.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 11:09:10.648: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4bf2cc1-cc0b-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-qmx9q" to be "success or failure"
Jul 22 11:09:10.655: INFO: Pod "downwardapi-volume-c4bf2cc1-cc0b-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.837448ms
Jul 22 11:09:12.660: INFO: Pod "downwardapi-volume-c4bf2cc1-cc0b-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012354694s
Jul 22 11:09:14.664: INFO: Pod "downwardapi-volume-c4bf2cc1-cc0b-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016251914s
STEP: Saw pod success
Jul 22 11:09:14.664: INFO: Pod "downwardapi-volume-c4bf2cc1-cc0b-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:09:14.666: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-c4bf2cc1-cc0b-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 11:09:14.702: INFO: Waiting for pod downwardapi-volume-c4bf2cc1-cc0b-11ea-aa05-0242ac11000b to disappear
Jul 22 11:09:14.716: INFO: Pod downwardapi-volume-c4bf2cc1-cc0b-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:09:14.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-qmx9q" for this suite.
Jul 22 11:09:20.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:09:20.795: INFO: namespace: e2e-tests-projected-qmx9q, resource: bindings, ignored listing per whitelist
Jul 22 11:09:20.858: INFO: namespace e2e-tests-projected-qmx9q deletion completed in 6.136850181s

• [SLOW TEST:10.600 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
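The projected downwardAPI case uses the same kind of downward API item as the earlier sketches, just nested under a projected volume's sources; roughly:

  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi           # file contains the request in MiB
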
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:09:20.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-dn9l5
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-dn9l5
STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-dn9l5
Jul 22 11:09:20.992: INFO: Found 0 stateful pods, waiting for 1
Jul 22 11:09:30.996: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul 22 11:09:30.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dn9l5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 22 11:09:31.499: INFO: stderr: "I0722 11:09:31.125076     885 log.go:172] (0xc00084e2c0) (0xc000756640) Create stream\nI0722 11:09:31.125134     885 log.go:172] (0xc00084e2c0) (0xc000756640) Stream added, broadcasting: 1\nI0722 11:09:31.127720     885 log.go:172] (0xc00084e2c0) Reply frame received for 1\nI0722 11:09:31.127762     885 log.go:172] (0xc00084e2c0) (0xc000656e60) Create stream\nI0722 11:09:31.127774     885 log.go:172] (0xc00084e2c0) (0xc000656e60) Stream added, broadcasting: 3\nI0722 11:09:31.128706     885 log.go:172] (0xc00084e2c0) Reply frame received for 3\nI0722 11:09:31.128855     885 log.go:172] (0xc00084e2c0) (0xc0007566e0) Create stream\nI0722 11:09:31.128873     885 log.go:172] (0xc00084e2c0) (0xc0007566e0) Stream added, broadcasting: 5\nI0722 11:09:31.129862     885 log.go:172] (0xc00084e2c0) Reply frame received for 5\nI0722 11:09:31.492571     885 log.go:172] (0xc00084e2c0) Data frame received for 5\nI0722 11:09:31.492759     885 log.go:172] (0xc0007566e0) (5) Data frame handling\nI0722 11:09:31.492862     885 log.go:172] (0xc00084e2c0) Data frame received for 3\nI0722 11:09:31.492913     885 log.go:172] (0xc000656e60) (3) Data frame handling\nI0722 11:09:31.492940     885 log.go:172] (0xc000656e60) (3) Data frame sent\nI0722 11:09:31.492960     885 log.go:172] (0xc00084e2c0) Data frame received for 3\nI0722 11:09:31.492970     885 log.go:172] (0xc000656e60) (3) Data frame handling\nI0722 11:09:31.495508     885 log.go:172] (0xc00084e2c0) Data frame received for 1\nI0722 11:09:31.495529     885 log.go:172] (0xc000756640) (1) Data frame handling\nI0722 11:09:31.495542     885 log.go:172] (0xc000756640) (1) Data frame sent\nI0722 11:09:31.495555     885 log.go:172] (0xc00084e2c0) (0xc000756640) Stream removed, broadcasting: 1\nI0722 11:09:31.495677     885 log.go:172] (0xc00084e2c0) Go away received\nI0722 11:09:31.495713     885 log.go:172] (0xc00084e2c0) (0xc000756640) Stream removed, broadcasting: 1\nI0722 11:09:31.495731     885 log.go:172] (0xc00084e2c0) (0xc000656e60) Stream removed, broadcasting: 3\nI0722 11:09:31.495739     885 log.go:172] (0xc00084e2c0) (0xc0007566e0) Stream removed, broadcasting: 5\n"
Jul 22 11:09:31.499: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 22 11:09:31.499: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 22 11:09:31.503: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 22 11:09:41.506: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 22 11:09:41.506: INFO: Waiting for statefulset status.replicas updated to 0
Jul 22 11:09:41.526: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999669s
Jul 22 11:09:42.532: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991699857s
Jul 22 11:09:43.536: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.986075259s
Jul 22 11:09:44.541: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.981287007s
Jul 22 11:09:45.665: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.976705681s
Jul 22 11:09:46.669: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.853046466s
Jul 22 11:09:47.674: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.848697047s
Jul 22 11:09:48.678: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.843372185s
Jul 22 11:09:49.683: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.83965826s
Jul 22 11:09:50.687: INFO: Verifying statefulset ss doesn't scale past 1 for another 834.485414ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace e2e-tests-statefulset-dn9l5
Jul 22 11:09:51.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dn9l5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 22 11:09:51.888: INFO: stderr: "I0722 11:09:51.816263     908 log.go:172] (0xc00016a840) (0xc000768640) Create stream\nI0722 11:09:51.816327     908 log.go:172] (0xc00016a840) (0xc000768640) Stream added, broadcasting: 1\nI0722 11:09:51.819269     908 log.go:172] (0xc00016a840) Reply frame received for 1\nI0722 11:09:51.819318     908 log.go:172] (0xc00016a840) (0xc000656d20) Create stream\nI0722 11:09:51.819334     908 log.go:172] (0xc00016a840) (0xc000656d20) Stream added, broadcasting: 3\nI0722 11:09:51.820319     908 log.go:172] (0xc00016a840) Reply frame received for 3\nI0722 11:09:51.820373     908 log.go:172] (0xc00016a840) (0xc0007686e0) Create stream\nI0722 11:09:51.820392     908 log.go:172] (0xc00016a840) (0xc0007686e0) Stream added, broadcasting: 5\nI0722 11:09:51.821592     908 log.go:172] (0xc00016a840) Reply frame received for 5\nI0722 11:09:51.881261     908 log.go:172] (0xc00016a840) Data frame received for 5\nI0722 11:09:51.881307     908 log.go:172] (0xc0007686e0) (5) Data frame handling\nI0722 11:09:51.881342     908 log.go:172] (0xc00016a840) Data frame received for 3\nI0722 11:09:51.881361     908 log.go:172] (0xc000656d20) (3) Data frame handling\nI0722 11:09:51.881383     908 log.go:172] (0xc000656d20) (3) Data frame sent\nI0722 11:09:51.881413     908 log.go:172] (0xc00016a840) Data frame received for 3\nI0722 11:09:51.881429     908 log.go:172] (0xc000656d20) (3) Data frame handling\nI0722 11:09:51.883233     908 log.go:172] (0xc00016a840) Data frame received for 1\nI0722 11:09:51.883256     908 log.go:172] (0xc000768640) (1) Data frame handling\nI0722 11:09:51.883278     908 log.go:172] (0xc000768640) (1) Data frame sent\nI0722 11:09:51.883292     908 log.go:172] (0xc00016a840) (0xc000768640) Stream removed, broadcasting: 1\nI0722 11:09:51.883369     908 log.go:172] (0xc00016a840) Go away received\nI0722 11:09:51.883478     908 log.go:172] (0xc00016a840) (0xc000768640) Stream removed, broadcasting: 1\nI0722 11:09:51.883516     908 log.go:172] (0xc00016a840) (0xc000656d20) Stream removed, broadcasting: 3\nI0722 11:09:51.883529     908 log.go:172] (0xc00016a840) (0xc0007686e0) Stream removed, broadcasting: 5\n"
Jul 22 11:09:51.888: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 22 11:09:51.888: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 22 11:09:51.892: INFO: Found 1 stateful pod, waiting for 3
Jul 22 11:10:01.895: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 11:10:01.895: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 11:10:01.895: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jul 22 11:10:01.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dn9l5 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 22 11:10:02.117: INFO: stderr: "I0722 11:10:02.043824     929 log.go:172] (0xc0006d0210) (0xc0005e5400) Create stream\nI0722 11:10:02.043898     929 log.go:172] (0xc0006d0210) (0xc0005e5400) Stream added, broadcasting: 1\nI0722 11:10:02.046156     929 log.go:172] (0xc0006d0210) Reply frame received for 1\nI0722 11:10:02.046199     929 log.go:172] (0xc0006d0210) (0xc000694000) Create stream\nI0722 11:10:02.046208     929 log.go:172] (0xc0006d0210) (0xc000694000) Stream added, broadcasting: 3\nI0722 11:10:02.047392     929 log.go:172] (0xc0006d0210) Reply frame received for 3\nI0722 11:10:02.047442     929 log.go:172] (0xc0006d0210) (0xc0005e54a0) Create stream\nI0722 11:10:02.047463     929 log.go:172] (0xc0006d0210) (0xc0005e54a0) Stream added, broadcasting: 5\nI0722 11:10:02.048407     929 log.go:172] (0xc0006d0210) Reply frame received for 5\nI0722 11:10:02.112248     929 log.go:172] (0xc0006d0210) Data frame received for 5\nI0722 11:10:02.112295     929 log.go:172] (0xc0005e54a0) (5) Data frame handling\nI0722 11:10:02.112321     929 log.go:172] (0xc0006d0210) Data frame received for 3\nI0722 11:10:02.112330     929 log.go:172] (0xc000694000) (3) Data frame handling\nI0722 11:10:02.112345     929 log.go:172] (0xc000694000) (3) Data frame sent\nI0722 11:10:02.112427     929 log.go:172] (0xc0006d0210) Data frame received for 3\nI0722 11:10:02.112465     929 log.go:172] (0xc000694000) (3) Data frame handling\nI0722 11:10:02.114472     929 log.go:172] (0xc0006d0210) Data frame received for 1\nI0722 11:10:02.114489     929 log.go:172] (0xc0005e5400) (1) Data frame handling\nI0722 11:10:02.114499     929 log.go:172] (0xc0005e5400) (1) Data frame sent\nI0722 11:10:02.114929     929 log.go:172] (0xc0006d0210) (0xc0005e5400) Stream removed, broadcasting: 1\nI0722 11:10:02.115099     929 log.go:172] (0xc0006d0210) Go away received\nI0722 11:10:02.115147     929 log.go:172] (0xc0006d0210) (0xc0005e5400) Stream removed, broadcasting: 1\nI0722 11:10:02.115172     929 log.go:172] (0xc0006d0210) (0xc000694000) Stream removed, broadcasting: 3\nI0722 11:10:02.115185     929 log.go:172] (0xc0006d0210) (0xc0005e54a0) Stream removed, broadcasting: 5\n"
Jul 22 11:10:02.117: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 22 11:10:02.117: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 22 11:10:02.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dn9l5 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 22 11:10:02.361: INFO: stderr: "I0722 11:10:02.236871     951 log.go:172] (0xc00015c6e0) (0xc00078e640) Create stream\nI0722 11:10:02.236928     951 log.go:172] (0xc00015c6e0) (0xc00078e640) Stream added, broadcasting: 1\nI0722 11:10:02.244652     951 log.go:172] (0xc00015c6e0) Reply frame received for 1\nI0722 11:10:02.244837     951 log.go:172] (0xc00015c6e0) (0xc00078e6e0) Create stream\nI0722 11:10:02.244866     951 log.go:172] (0xc00015c6e0) (0xc00078e6e0) Stream added, broadcasting: 3\nI0722 11:10:02.245834     951 log.go:172] (0xc00015c6e0) Reply frame received for 3\nI0722 11:10:02.245875     951 log.go:172] (0xc00015c6e0) (0xc000664d20) Create stream\nI0722 11:10:02.245888     951 log.go:172] (0xc00015c6e0) (0xc000664d20) Stream added, broadcasting: 5\nI0722 11:10:02.246728     951 log.go:172] (0xc00015c6e0) Reply frame received for 5\nI0722 11:10:02.356310     951 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0722 11:10:02.356351     951 log.go:172] (0xc00078e6e0) (3) Data frame handling\nI0722 11:10:02.356377     951 log.go:172] (0xc00078e6e0) (3) Data frame sent\nI0722 11:10:02.356486     951 log.go:172] (0xc00015c6e0) Data frame received for 5\nI0722 11:10:02.356503     951 log.go:172] (0xc000664d20) (5) Data frame handling\nI0722 11:10:02.356646     951 log.go:172] (0xc00015c6e0) Data frame received for 3\nI0722 11:10:02.356671     951 log.go:172] (0xc00078e6e0) (3) Data frame handling\nI0722 11:10:02.358340     951 log.go:172] (0xc00015c6e0) Data frame received for 1\nI0722 11:10:02.358356     951 log.go:172] (0xc00078e640) (1) Data frame handling\nI0722 11:10:02.358365     951 log.go:172] (0xc00078e640) (1) Data frame sent\nI0722 11:10:02.358375     951 log.go:172] (0xc00015c6e0) (0xc00078e640) Stream removed, broadcasting: 1\nI0722 11:10:02.358453     951 log.go:172] (0xc00015c6e0) Go away received\nI0722 11:10:02.358507     951 log.go:172] (0xc00015c6e0) (0xc00078e640) Stream removed, broadcasting: 1\nI0722 11:10:02.358520     951 log.go:172] (0xc00015c6e0) (0xc00078e6e0) Stream removed, broadcasting: 3\nI0722 11:10:02.358528     951 log.go:172] (0xc00015c6e0) (0xc000664d20) Stream removed, broadcasting: 5\n"
Jul 22 11:10:02.361: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 22 11:10:02.361: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 22 11:10:02.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dn9l5 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 22 11:10:02.607: INFO: stderr: "I0722 11:10:02.506000     975 log.go:172] (0xc000138630) (0xc00066f2c0) Create stream\nI0722 11:10:02.506046     975 log.go:172] (0xc000138630) (0xc00066f2c0) Stream added, broadcasting: 1\nI0722 11:10:02.507628     975 log.go:172] (0xc000138630) Reply frame received for 1\nI0722 11:10:02.507676     975 log.go:172] (0xc000138630) (0xc0005d6000) Create stream\nI0722 11:10:02.507699     975 log.go:172] (0xc000138630) (0xc0005d6000) Stream added, broadcasting: 3\nI0722 11:10:02.508411     975 log.go:172] (0xc000138630) Reply frame received for 3\nI0722 11:10:02.508436     975 log.go:172] (0xc000138630) (0xc00059e000) Create stream\nI0722 11:10:02.508444     975 log.go:172] (0xc000138630) (0xc00059e000) Stream added, broadcasting: 5\nI0722 11:10:02.509161     975 log.go:172] (0xc000138630) Reply frame received for 5\nI0722 11:10:02.601467     975 log.go:172] (0xc000138630) Data frame received for 3\nI0722 11:10:02.601514     975 log.go:172] (0xc0005d6000) (3) Data frame handling\nI0722 11:10:02.601529     975 log.go:172] (0xc0005d6000) (3) Data frame sent\nI0722 11:10:02.601547     975 log.go:172] (0xc000138630) Data frame received for 3\nI0722 11:10:02.601563     975 log.go:172] (0xc0005d6000) (3) Data frame handling\nI0722 11:10:02.601608     975 log.go:172] (0xc000138630) Data frame received for 5\nI0722 11:10:02.601628     975 log.go:172] (0xc00059e000) (5) Data frame handling\nI0722 11:10:02.603417     975 log.go:172] (0xc000138630) Data frame received for 1\nI0722 11:10:02.603441     975 log.go:172] (0xc00066f2c0) (1) Data frame handling\nI0722 11:10:02.603454     975 log.go:172] (0xc00066f2c0) (1) Data frame sent\nI0722 11:10:02.603465     975 log.go:172] (0xc000138630) (0xc00066f2c0) Stream removed, broadcasting: 1\nI0722 11:10:02.603477     975 log.go:172] (0xc000138630) Go away received\nI0722 11:10:02.603670     975 log.go:172] (0xc000138630) (0xc00066f2c0) Stream removed, broadcasting: 1\nI0722 11:10:02.603694     975 log.go:172] (0xc000138630) (0xc0005d6000) Stream removed, broadcasting: 3\nI0722 11:10:02.603713     975 log.go:172] (0xc000138630) (0xc00059e000) Stream removed, broadcasting: 5\n"
Jul 22 11:10:02.607: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 22 11:10:02.607: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 22 11:10:02.607: INFO: Waiting for statefulset status.replicas updated to 0
Jul 22 11:10:02.611: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jul 22 11:10:12.731: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 22 11:10:12.731: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 22 11:10:12.731: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 22 11:10:13.013: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999495s
Jul 22 11:10:14.019: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.957935256s
Jul 22 11:10:15.024: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.952557649s
Jul 22 11:10:16.029: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.947150559s
Jul 22 11:10:17.132: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.942000232s
Jul 22 11:10:18.136: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.839332537s
Jul 22 11:10:19.140: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.834935164s
Jul 22 11:10:20.353: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.830930881s
Jul 22 11:10:21.358: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.617909935s
Jul 22 11:10:22.369: INFO: Verifying statefulset ss doesn't scale past 3 for another 613.608557ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-dn9l5
Jul 22 11:10:23.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dn9l5 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 22 11:10:23.777: INFO: stderr: "I0722 11:10:23.679723     997 log.go:172] (0xc00014e840) (0xc0007dd540) Create stream\nI0722 11:10:23.679774     997 log.go:172] (0xc00014e840) (0xc0007dd540) Stream added, broadcasting: 1\nI0722 11:10:23.681836     997 log.go:172] (0xc00014e840) Reply frame received for 1\nI0722 11:10:23.681889     997 log.go:172] (0xc00014e840) (0xc0007ca000) Create stream\nI0722 11:10:23.681914     997 log.go:172] (0xc00014e840) (0xc0007ca000) Stream added, broadcasting: 3\nI0722 11:10:23.682984     997 log.go:172] (0xc00014e840) Reply frame received for 3\nI0722 11:10:23.683023     997 log.go:172] (0xc00014e840) (0xc0007dd5e0) Create stream\nI0722 11:10:23.683039     997 log.go:172] (0xc00014e840) (0xc0007dd5e0) Stream added, broadcasting: 5\nI0722 11:10:23.684016     997 log.go:172] (0xc00014e840) Reply frame received for 5\nI0722 11:10:23.771750     997 log.go:172] (0xc00014e840) Data frame received for 3\nI0722 11:10:23.771781     997 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0722 11:10:23.771790     997 log.go:172] (0xc0007ca000) (3) Data frame sent\nI0722 11:10:23.771798     997 log.go:172] (0xc00014e840) Data frame received for 3\nI0722 11:10:23.771820     997 log.go:172] (0xc00014e840) Data frame received for 5\nI0722 11:10:23.771852     997 log.go:172] (0xc0007dd5e0) (5) Data frame handling\nI0722 11:10:23.771880     997 log.go:172] (0xc0007ca000) (3) Data frame handling\nI0722 11:10:23.773425     997 log.go:172] (0xc00014e840) Data frame received for 1\nI0722 11:10:23.773446     997 log.go:172] (0xc0007dd540) (1) Data frame handling\nI0722 11:10:23.773460     997 log.go:172] (0xc0007dd540) (1) Data frame sent\nI0722 11:10:23.773506     997 log.go:172] (0xc00014e840) (0xc0007dd540) Stream removed, broadcasting: 1\nI0722 11:10:23.773557     997 log.go:172] (0xc00014e840) Go away received\nI0722 11:10:23.773637     997 log.go:172] (0xc00014e840) (0xc0007dd540) Stream removed, broadcasting: 1\nI0722 11:10:23.773653     997 log.go:172] (0xc00014e840) (0xc0007ca000) Stream removed, broadcasting: 3\nI0722 11:10:23.773662     997 log.go:172] (0xc00014e840) (0xc0007dd5e0) Stream removed, broadcasting: 5\n"
Jul 22 11:10:23.777: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 22 11:10:23.777: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 22 11:10:23.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dn9l5 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 22 11:10:23.996: INFO: stderr: "I0722 11:10:23.919379    1019 log.go:172] (0xc0007a42c0) (0xc000858640) Create stream\nI0722 11:10:23.919448    1019 log.go:172] (0xc0007a42c0) (0xc000858640) Stream added, broadcasting: 1\nI0722 11:10:23.922315    1019 log.go:172] (0xc0007a42c0) Reply frame received for 1\nI0722 11:10:23.922402    1019 log.go:172] (0xc0007a42c0) (0xc000600000) Create stream\nI0722 11:10:23.922433    1019 log.go:172] (0xc0007a42c0) (0xc000600000) Stream added, broadcasting: 3\nI0722 11:10:23.923390    1019 log.go:172] (0xc0007a42c0) Reply frame received for 3\nI0722 11:10:23.923447    1019 log.go:172] (0xc0007a42c0) (0xc0002ace60) Create stream\nI0722 11:10:23.923476    1019 log.go:172] (0xc0007a42c0) (0xc0002ace60) Stream added, broadcasting: 5\nI0722 11:10:23.924470    1019 log.go:172] (0xc0007a42c0) Reply frame received for 5\nI0722 11:10:23.990194    1019 log.go:172] (0xc0007a42c0) Data frame received for 5\nI0722 11:10:23.990226    1019 log.go:172] (0xc0002ace60) (5) Data frame handling\nI0722 11:10:23.990264    1019 log.go:172] (0xc0007a42c0) Data frame received for 3\nI0722 11:10:23.990315    1019 log.go:172] (0xc000600000) (3) Data frame handling\nI0722 11:10:23.990351    1019 log.go:172] (0xc000600000) (3) Data frame sent\nI0722 11:10:23.990379    1019 log.go:172] (0xc0007a42c0) Data frame received for 3\nI0722 11:10:23.990400    1019 log.go:172] (0xc000600000) (3) Data frame handling\nI0722 11:10:23.992062    1019 log.go:172] (0xc0007a42c0) Data frame received for 1\nI0722 11:10:23.992098    1019 log.go:172] (0xc000858640) (1) Data frame handling\nI0722 11:10:23.992122    1019 log.go:172] (0xc000858640) (1) Data frame sent\nI0722 11:10:23.992145    1019 log.go:172] (0xc0007a42c0) (0xc000858640) Stream removed, broadcasting: 1\nI0722 11:10:23.992182    1019 log.go:172] (0xc0007a42c0) Go away received\nI0722 11:10:23.992415    1019 log.go:172] (0xc0007a42c0) (0xc000858640) Stream removed, broadcasting: 1\nI0722 11:10:23.992433    1019 log.go:172] (0xc0007a42c0) (0xc000600000) Stream removed, broadcasting: 3\nI0722 11:10:23.992444    1019 log.go:172] (0xc0007a42c0) (0xc0002ace60) Stream removed, broadcasting: 5\n"
Jul 22 11:10:23.996: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 22 11:10:23.996: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 22 11:10:23.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-dn9l5 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 22 11:10:24.226: INFO: stderr: "I0722 11:10:24.147824    1040 log.go:172] (0xc0006fc370) (0xc00078f540) Create stream\nI0722 11:10:24.147882    1040 log.go:172] (0xc0006fc370) (0xc00078f540) Stream added, broadcasting: 1\nI0722 11:10:24.149751    1040 log.go:172] (0xc0006fc370) Reply frame received for 1\nI0722 11:10:24.149797    1040 log.go:172] (0xc0006fc370) (0xc0007da000) Create stream\nI0722 11:10:24.149807    1040 log.go:172] (0xc0006fc370) (0xc0007da000) Stream added, broadcasting: 3\nI0722 11:10:24.150604    1040 log.go:172] (0xc0006fc370) Reply frame received for 3\nI0722 11:10:24.150634    1040 log.go:172] (0xc0006fc370) (0xc00078f5e0) Create stream\nI0722 11:10:24.150644    1040 log.go:172] (0xc0006fc370) (0xc00078f5e0) Stream added, broadcasting: 5\nI0722 11:10:24.151280    1040 log.go:172] (0xc0006fc370) Reply frame received for 5\nI0722 11:10:24.219582    1040 log.go:172] (0xc0006fc370) Data frame received for 5\nI0722 11:10:24.219624    1040 log.go:172] (0xc00078f5e0) (5) Data frame handling\nI0722 11:10:24.219683    1040 log.go:172] (0xc0006fc370) Data frame received for 3\nI0722 11:10:24.219725    1040 log.go:172] (0xc0007da000) (3) Data frame handling\nI0722 11:10:24.219749    1040 log.go:172] (0xc0007da000) (3) Data frame sent\nI0722 11:10:24.219758    1040 log.go:172] (0xc0006fc370) Data frame received for 3\nI0722 11:10:24.219766    1040 log.go:172] (0xc0007da000) (3) Data frame handling\nI0722 11:10:24.221411    1040 log.go:172] (0xc0006fc370) Data frame received for 1\nI0722 11:10:24.221431    1040 log.go:172] (0xc00078f540) (1) Data frame handling\nI0722 11:10:24.221456    1040 log.go:172] (0xc00078f540) (1) Data frame sent\nI0722 11:10:24.221475    1040 log.go:172] (0xc0006fc370) (0xc00078f540) Stream removed, broadcasting: 1\nI0722 11:10:24.221505    1040 log.go:172] (0xc0006fc370) Go away received\nI0722 11:10:24.221817    1040 log.go:172] (0xc0006fc370) (0xc00078f540) Stream removed, broadcasting: 1\nI0722 11:10:24.221849    1040 log.go:172] (0xc0006fc370) (0xc0007da000) Stream removed, broadcasting: 3\nI0722 11:10:24.221869    1040 log.go:172] (0xc0006fc370) (0xc00078f5e0) Stream removed, broadcasting: 5\n"
Jul 22 11:10:24.226: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 22 11:10:24.226: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 22 11:10:24.226: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 22 11:10:54.266: INFO: Deleting all statefulset in ns e2e-tests-statefulset-dn9l5
Jul 22 11:10:54.269: INFO: Scaling statefulset ss to 0
Jul 22 11:10:54.277: INFO: Waiting for statefulset status.replicas updated to 0
Jul 22 11:10:54.280: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:10:54.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-dn9l5" for this suite.
Jul 22 11:11:00.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:11:00.440: INFO: namespace: e2e-tests-statefulset-dn9l5, resource: bindings, ignored listing per whitelist
Jul 22 11:11:00.442: INFO: namespace e2e-tests-statefulset-dn9l5 deletion completed in 6.109092963s

• [SLOW TEST:99.584 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
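The StatefulSet block above first breaks each pod's readiness by moving index.html out of the nginx web root (the kubectl exec ... mv commands), confirms that scaling halts while a pod is unready, then restores the file and verifies that scale-up proceeds in ordinal order (ss-0, ss-1, ss-2) and scale-down in reverse. Below is a minimal client-go sketch of driving the same Scale subresource; it is not part of this run, and the namespace, StatefulSet name, and label selector are illustrative placeholders.

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client from a kubeconfig, the same way the suite does.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "default", "ss" // illustrative namespace and StatefulSet name

	// Read the Scale subresource, set the desired replica count, write it back.
	scale, err := cs.AppsV1().StatefulSets(ns).GetScale(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 3
	if _, err := cs.AppsV1().StatefulSets(ns).UpdateScale(context.TODO(), name, scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// The controller creates pods one at a time in ordinal order (ss-0, ss-1, ss-2)
	// and only proceeds while the previous ordinal is Running and Ready;
	// scale-down removes the highest ordinal first.
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: "app=ss"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}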
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:11:00.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul 22 11:11:00.575: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 22 11:11:00.582: INFO: Waiting for terminating namespaces to be deleted...
Jul 22 11:11:00.584: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Jul 22 11:11:00.589: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container status recorded)
Jul 22 11:11:00.589: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 22 11:11:00.589: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container status recorded)
Jul 22 11:11:00.589: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 22 11:11:00.589: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Jul 22 11:11:00.594: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container status recorded)
Jul 22 11:11:00.594: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 22 11:11:00.594: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container status recorded)
Jul 22 11:11:00.594: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: verifying the node has the label node hunter-worker
STEP: verifying the node has the label node hunter-worker2
Jul 22 11:11:00.694: INFO: Pod kindnet-2w5m4 requesting resource cpu=100m on Node hunter-worker
Jul 22 11:11:00.695: INFO: Pod kindnet-hpnvh requesting resource cpu=100m on Node hunter-worker2
Jul 22 11:11:00.695: INFO: Pod kube-proxy-8wnps requesting resource cpu=0m on Node hunter-worker
Jul 22 11:11:00.695: INFO: Pod kube-proxy-b6f6s requesting resource cpu=0m on Node hunter-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0661878a-cc0c-11ea-aa05-0242ac11000b.16240ea2e169cae3], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-77ff8/filler-pod-0661878a-cc0c-11ea-aa05-0242ac11000b to hunter-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0661878a-cc0c-11ea-aa05-0242ac11000b.16240ea37d7b30fe], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0661878a-cc0c-11ea-aa05-0242ac11000b.16240ea3e4648b19], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0661878a-cc0c-11ea-aa05-0242ac11000b.16240ea3f4e924d5], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-06627571-cc0c-11ea-aa05-0242ac11000b.16240ea2e31db691], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-77ff8/filler-pod-06627571-cc0c-11ea-aa05-0242ac11000b to hunter-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-06627571-cc0c-11ea-aa05-0242ac11000b.16240ea352c79b75], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-06627571-cc0c-11ea-aa05-0242ac11000b.16240ea3d73ead51], Reason = [Created], Message = [Created container]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-06627571-cc0c-11ea-aa05-0242ac11000b.16240ea3ef2a897e], Reason = [Started], Message = [Started container]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.16240ea449a7f455], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node hunter-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node hunter-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:11:08.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-77ff8" for this suite.
Jul 22 11:11:14.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:11:14.268: INFO: namespace: e2e-tests-sched-pred-77ff8, resource: bindings, ignored listing per whitelist
Jul 22 11:11:14.332: INFO: namespace e2e-tests-sched-pred-77ff8 deletion completed in 6.306450605s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:13.890 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
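The scheduling test above fills each worker node with a pause pod sized to consume most of its allocatable CPU, then creates one more pod whose request cannot fit anywhere and expects the FailedScheduling event shown. A minimal client-go sketch of such an oversized request follows; it is not from this run, and the pod name, image, and the 600m figure are illustrative.

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createOversizedPod creates a pod whose CPU request cannot fit on any node,
// so it stays Pending and the scheduler emits a FailedScheduling event.
func createOversizedPod(cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// Illustrative figure: larger than the CPU left free
						// on every schedulable node after the filler pods.
						corev1.ResourceCPU: resource.MustParse("600m"),
					},
				},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	return err
}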
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:11:14.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 11:11:14.712: INFO: Creating deployment "test-recreate-deployment"
Jul 22 11:11:14.718: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jul 22 11:11:14.737: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
Jul 22 11:11:16.866: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jul 22 11:11:16.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731013074, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731013074, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731013075, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731013074, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 22 11:11:18.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731013074, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731013074, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731013075, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731013074, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5bf7f65dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 22 11:11:20.873: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jul 22 11:11:20.879: INFO: Updating deployment test-recreate-deployment
Jul 22 11:11:20.879: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul 22 11:11:21.538: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-z7v8p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-z7v8p/deployments/test-recreate-deployment,UID:0ebc9b40-cc0c-11ea-b2c9-0242ac120008,ResourceVersion:2175194,Generation:2,CreationTimestamp:2020-07-22 11:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-07-22 11:11:21 +0000 UTC 2020-07-22 11:11:21 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-22 11:11:21 +0000 UTC 2020-07-22 11:11:14 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-589c4bfd" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jul 22 11:11:21.547: INFO: New ReplicaSet "test-recreate-deployment-589c4bfd" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd,GenerateName:,Namespace:e2e-tests-deployment-z7v8p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-z7v8p/replicasets/test-recreate-deployment-589c4bfd,UID:128b7682-cc0c-11ea-b2c9-0242ac120008,ResourceVersion:2175192,Generation:1,CreationTimestamp:2020-07-22 11:11:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0ebc9b40-cc0c-11ea-b2c9-0242ac120008 0xc0024b29ff 0xc0024b2a10}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 22 11:11:21.547: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jul 22 11:11:21.547: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5bf7f65dc,GenerateName:,Namespace:e2e-tests-deployment-z7v8p,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-z7v8p/replicasets/test-recreate-deployment-5bf7f65dc,UID:0ec04524-cc0c-11ea-b2c9-0242ac120008,ResourceVersion:2175183,Generation:2,CreationTimestamp:2020-07-22 11:11:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 0ebc9b40-cc0c-11ea-b2c9-0242ac120008 0xc0024b2ad0 0xc0024b2ad1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5bf7f65dc,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 22 11:11:21.549: INFO: Pod "test-recreate-deployment-589c4bfd-ct5pd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-589c4bfd-ct5pd,GenerateName:test-recreate-deployment-589c4bfd-,Namespace:e2e-tests-deployment-z7v8p,SelfLink:/api/v1/namespaces/e2e-tests-deployment-z7v8p/pods/test-recreate-deployment-589c4bfd-ct5pd,UID:1293d24b-cc0c-11ea-b2c9-0242ac120008,ResourceVersion:2175195,Generation:0,CreationTimestamp:2020-07-22 11:11:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 589c4bfd,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-589c4bfd 128b7682-cc0c-11ea-b2c9-0242ac120008 0xc00229d18f 0xc00229d1a0}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-lqq4c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lqq4c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lqq4c true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00229d210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00229d230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:11:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:11:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:11:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:11:21 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-22 11:11:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:11:21.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-z7v8p" for this suite.
Jul 22 11:11:29.568: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:11:29.598: INFO: namespace: e2e-tests-deployment-z7v8p, resource: bindings, ignored listing per whitelist
Jul 22 11:11:29.649: INFO: namespace e2e-tests-deployment-z7v8p deletion completed in 8.096384886s

• [SLOW TEST:15.317 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
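The Recreate test above relies on the Deployment strategy type Recreate (visible in the dumped spec), which scales the old ReplicaSet to zero before the new ReplicaSet creates any pods, so old and new pods never run together. A minimal sketch of such a spec with the k8s.io/api types follows; it is not from this run, and the labels and image simply mirror the dump above for illustration.

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// recreateDeployment builds a Deployment whose strategy is Recreate, so a new
// rollout first deletes all old pods and only then creates pods from the new
// template.
func recreateDeployment(name string, replicas int32) *appsv1.Deployment {
	labels := map[string]string{"name": "sample-pod-3"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}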
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:11:29.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's args
Jul 22 11:11:29.782: INFO: Waiting up to 5m0s for pod "var-expansion-17b57dcb-cc0c-11ea-aa05-0242ac11000b" in namespace "e2e-tests-var-expansion-xs7sv" to be "success or failure"
Jul 22 11:11:29.786: INFO: Pod "var-expansion-17b57dcb-cc0c-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.922364ms
Jul 22 11:11:31.790: INFO: Pod "var-expansion-17b57dcb-cc0c-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007406454s
Jul 22 11:11:33.794: INFO: Pod "var-expansion-17b57dcb-cc0c-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011912288s
STEP: Saw pod success
Jul 22 11:11:33.794: INFO: Pod "var-expansion-17b57dcb-cc0c-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:11:33.797: INFO: Trying to get logs from node hunter-worker pod var-expansion-17b57dcb-cc0c-11ea-aa05-0242ac11000b container dapi-container: 
STEP: delete the pod
Jul 22 11:11:33.852: INFO: Waiting for pod var-expansion-17b57dcb-cc0c-11ea-aa05-0242ac11000b to disappear
Jul 22 11:11:33.861: INFO: Pod var-expansion-17b57dcb-cc0c-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:11:33.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-xs7sv" for this suite.
Jul 22 11:11:39.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:11:40.061: INFO: namespace: e2e-tests-var-expansion-xs7sv, resource: bindings, ignored listing per whitelist
Jul 22 11:11:40.072: INFO: namespace e2e-tests-var-expansion-xs7sv deletion completed in 6.206921056s

• [SLOW TEST:10.422 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
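The var-expansion test above exercises $(VAR) substitution: the kubelet expands $(NAME) references in a container's command and args from the container's env before the process starts. A minimal sketch of a pod that does this follows; it is not from this run, and the image, variable name, and message are illustrative.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// varExpansionPod returns a pod whose args reference an env var with the
// $(NAME) syntax; the kubelet substitutes the value before the shell runs,
// so the container prints the expanded greeting and exits.
func varExpansionPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c"},
				Args:    []string{"echo $(GREETING)"}, // expanded to the env value below
				Env: []corev1.EnvVar{{
					Name:  "GREETING",
					Value: "hello from var expansion",
				}},
			}},
		},
	}
}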
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:11:40.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-map-1de7d7be-cc0c-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 11:11:40.299: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1df27948-cc0c-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-l8ntq" to be "success or failure"
Jul 22 11:11:40.305: INFO: Pod "pod-projected-secrets-1df27948-cc0c-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.882946ms
Jul 22 11:11:42.557: INFO: Pod "pod-projected-secrets-1df27948-cc0c-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257681006s
Jul 22 11:11:44.561: INFO: Pod "pod-projected-secrets-1df27948-cc0c-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.261515547s
Jul 22 11:11:46.565: INFO: Pod "pod-projected-secrets-1df27948-cc0c-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.265659456s
STEP: Saw pod success
Jul 22 11:11:46.565: INFO: Pod "pod-projected-secrets-1df27948-cc0c-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:11:46.568: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-1df27948-cc0c-11ea-aa05-0242ac11000b container projected-secret-volume-test: 
STEP: delete the pod
Jul 22 11:11:46.591: INFO: Waiting for pod pod-projected-secrets-1df27948-cc0c-11ea-aa05-0242ac11000b to disappear
Jul 22 11:11:46.595: INFO: Pod pod-projected-secrets-1df27948-cc0c-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:11:46.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-l8ntq" for this suite.
Jul 22 11:11:52.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:11:52.710: INFO: namespace: e2e-tests-projected-l8ntq, resource: bindings, ignored listing per whitelist
Jul 22 11:11:52.715: INFO: namespace e2e-tests-projected-l8ntq deletion completed in 6.116300218s

• [SLOW TEST:12.643 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
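The projected-secret test above mounts a secret through a projected volume and remaps a key onto a chosen file name (the "with mappings" part). A minimal sketch of such a volume with the k8s.io/api types follows; it is not from this run, and the key, target path, mount path, and image are illustrative.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod mounts secretName via a projected volume and maps one
// key onto a custom file name inside the mount.
func projectedSecretPod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
								// Map the key "data-1" onto the file "new-path-data-1".
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
}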
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:11:52.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-25759107-cc0c-11ea-aa05-0242ac11000b
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-25759107-cc0c-11ea-aa05-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:13:09.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-w5255" for this suite.
Jul 22 11:13:31.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:13:31.300: INFO: namespace: e2e-tests-configmap-w5255, resource: bindings, ignored listing per whitelist
Jul 22 11:13:31.317: INFO: namespace e2e-tests-configmap-w5255 deletion completed in 22.094513482s

• [SLOW TEST:98.601 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
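The ConfigMap test above creates a pod with the ConfigMap mounted as a volume, updates the ConfigMap, and waits for the kubelet to sync the new value into the running pod's file, which is why the wait spans more than a minute. A minimal client-go sketch of the update side follows; it is not from this run, and the key and value are illustrative.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateConfigMapValue changes one key of a mounted ConfigMap; the kubelet
// eventually writes the new value into the pod's mounted file on its next
// volume sync.
func updateConfigMapValue(cs kubernetes.Interface, ns, name string) error {
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // new value shows up in the mounted file later
	_, err = cs.CoreV1().ConfigMaps(ns).Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}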
S
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:13:31.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 11:13:51.462: INFO: Container started at 2020-07-22 11:13:34 +0000 UTC, pod became ready at 2020-07-22 11:13:51 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:13:51.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-h46x8" for this suite.
Jul 22 11:14:13.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:14:13.521: INFO: namespace: e2e-tests-container-probe-h46x8, resource: bindings, ignored listing per whitelist
Jul 22 11:14:13.561: INFO: namespace e2e-tests-container-probe-h46x8 deletion completed in 22.093782041s

• [SLOW TEST:42.244 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
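The probe test above gives the container a readiness probe with an initial delay and asserts that the pod does not report Ready before that delay elapses and, since no liveness probe is configured, never restarts. A minimal sketch of such a container follows; it is not from this run, the image, port, and 30s delay are illustrative, and note that recent k8s.io/api names the embedded field ProbeHandler while older releases (such as the v1.13 used by this suite) call it Handler.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// readinessProbedContainer returns a container that only becomes Ready once
// its readiness probe passes, and the first probe is delayed by
// InitialDelaySeconds; there is no liveness probe, so it is never restarted.
func readinessProbedContainer() corev1.Container {
	return corev1.Container{
		Name:  "test-webserver",
		Image: "nginx",
		ReadinessProbe: &corev1.Probe{
			InitialDelaySeconds: 30, // illustrative delay
			// Field name in recent k8s.io/api; older releases call it Handler.
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/",
					Port: intstr.FromInt(80),
				},
			},
		},
	}
}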
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:14:13.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 22 11:14:21.804: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 22 11:14:21.842: INFO: Pod pod-with-poststart-http-hook still exists
Jul 22 11:14:23.842: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 22 11:14:23.846: INFO: Pod pod-with-poststart-http-hook still exists
Jul 22 11:14:25.842: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 22 11:14:25.846: INFO: Pod pod-with-poststart-http-hook still exists
Jul 22 11:14:27.842: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul 22 11:14:27.846: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:14:27.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-l7gj9" for this suite.
Jul 22 11:14:49.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:14:49.949: INFO: namespace: e2e-tests-container-lifecycle-hook-l7gj9, resource: bindings, ignored listing per whitelist
Jul 22 11:14:49.965: INFO: namespace e2e-tests-container-lifecycle-hook-l7gj9 deletion completed in 22.115200117s

• [SLOW TEST:36.404 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
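The poststart check above relies on lifecycle.postStart: the kubelet runs the hook immediately after the container starts, here as an HTTP GET against the handler pod created in the BeforeEach step. A rough sketch of such a pod (host, port and path of the handler are placeholders, not values taken from the log; PostStart takes a *Handler on the v1.13-era API, *LifecycleHandler later):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pod-with-poststart-http-hook",
                Image: "docker.io/library/nginx:1.14-alpine", // illustrative
                Lifecycle: &corev1.Lifecycle{
                    // PostStart fires right after the container starts; the kubelet sends
                    // an HTTP GET to the handler pod created earlier in the test.
                    PostStart: &corev1.Handler{ // *LifecycleHandler in newer k8s.io/api releases
                        HTTPGet: &corev1.HTTPGetAction{
                            Host: "10.244.2.57",        // placeholder: the handler pod's IP
                            Port: intstr.FromInt(8080), // placeholder: the handler pod's port
                            Path: "/echo?msg=poststart",
                        },
                    },
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
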
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:14:49.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 11:14:50.135: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jul 22 11:14:55.139: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 22 11:14:55.139: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul 22 11:14:55.156: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-6vvcp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6vvcp/deployments/test-cleanup-deployment,UID:921f8c6c-cc0c-11ea-b2c9-0242ac120008,ResourceVersion:2175795,Generation:1,CreationTimestamp:2020-07-22 11:14:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jul 22 11:14:55.162: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jul 22 11:14:55.162: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jul 22 11:14:55.163: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:e2e-tests-deployment-6vvcp,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-6vvcp/replicasets/test-cleanup-controller,UID:8f1e321e-cc0c-11ea-b2c9-0242ac120008,ResourceVersion:2175796,Generation:1,CreationTimestamp:2020-07-22 11:14:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 921f8c6c-cc0c-11ea-b2c9-0242ac120008 0xc001a6c577 0xc001a6c578}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul 22 11:14:55.214: INFO: Pod "test-cleanup-controller-8657t" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-8657t,GenerateName:test-cleanup-controller-,Namespace:e2e-tests-deployment-6vvcp,SelfLink:/api/v1/namespaces/e2e-tests-deployment-6vvcp/pods/test-cleanup-controller-8657t,UID:8f23efca-cc0c-11ea-b2c9-0242ac120008,ResourceVersion:2175790,Generation:0,CreationTimestamp:2020-07-22 11:14:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 8f1e321e-cc0c-11ea-b2c9-0242ac120008 0xc001c42027 0xc001c42028}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-mn4m6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mn4m6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-mn4m6 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001c420a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001c42140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:14:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:14:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:14:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:14:50 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.56,StartTime:2020-07-22 11:14:50 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-22 11:14:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1fc3c1b21b08da1fc2f6525de756d57905da88ed9d91ac453f3043a293b49967}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:14:55.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-6vvcp" for this suite.
Jul 22 11:15:01.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:15:01.305: INFO: namespace: e2e-tests-deployment-6vvcp, resource: bindings, ignored listing per whitelist
Jul 22 11:15:01.344: INFO: namespace e2e-tests-deployment-6vvcp deletion completed in 6.126231166s

• [SLOW TEST:11.378 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
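The deployment dump above shows RevisionHistoryLimit:*0, which is what makes the superseded ReplicaSet disappear once the rollout replaces it. A sketch of an equivalent object, reusing the names that do appear in the dump (test-cleanup-deployment, the name: cleanup-pod label, the redis test image); everything else is illustrative:

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    labels := map[string]string{"name": "cleanup-pod"}
    d := appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment", Labels: labels},
        Spec: appsv1.DeploymentSpec{
            Replicas: int32Ptr(1),
            // With RevisionHistoryLimit set to 0, every superseded ReplicaSet is
            // garbage-collected as soon as the rollout moves on, which is what the
            // "Waiting for deployment test-cleanup-deployment history to be cleaned up"
            // step above is checking.
            RevisionHistoryLimit: int32Ptr(0),
            Selector:             &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "redis",
                        Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                    }},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(d, "", "  ")
    fmt.Println(string(out))
}
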
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:15:01.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-llt28 in namespace e2e-tests-proxy-xqckz
I0722 11:15:01.488693       7 runners.go:184] Created replication controller with name: proxy-service-llt28, namespace: e2e-tests-proxy-xqckz, replica count: 1
I0722 11:15:02.539122       7 runners.go:184] proxy-service-llt28 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0722 11:15:03.539372       7 runners.go:184] proxy-service-llt28 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0722 11:15:04.539595       7 runners.go:184] proxy-service-llt28 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0722 11:15:05.539822       7 runners.go:184] proxy-service-llt28 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0722 11:15:06.540037       7 runners.go:184] proxy-service-llt28 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0722 11:15:07.540265       7 runners.go:184] proxy-service-llt28 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0722 11:15:08.540521       7 runners.go:184] proxy-service-llt28 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0722 11:15:09.540843       7 runners.go:184] proxy-service-llt28 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 22 11:15:09.544: INFO: setup took 8.130978483s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jul 22 11:15:09.552: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-xqckz/pods/proxy-service-llt28-wqk7q:162/proxy/: bar (200; 7.522118ms)
Jul 22 11:15:09.553: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-xqckz/services/http:proxy-service-llt28:portname2/proxy/: bar (200; 7.859635ms)
Jul 22 11:15:09.553: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-xqckz/pods/http:proxy-service-llt28-wqk7q:1080/proxy/: 
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 22 11:15:28.467: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a348a9bd-cc0c-11ea-aa05-0242ac11000b"
Jul 22 11:15:28.467: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a348a9bd-cc0c-11ea-aa05-0242ac11000b" in namespace "e2e-tests-pods-lkgph" to be "terminated due to deadline exceeded"
Jul 22 11:15:28.473: INFO: Pod "pod-update-activedeadlineseconds-a348a9bd-cc0c-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 5.519279ms
Jul 22 11:15:30.476: INFO: Pod "pod-update-activedeadlineseconds-a348a9bd-cc0c-11ea-aa05-0242ac11000b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.009311968s
Jul 22 11:15:30.476: INFO: Pod "pod-update-activedeadlineseconds-a348a9bd-cc0c-11ea-aa05-0242ac11000b" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:15:30.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-lkgph" for this suite.
Jul 22 11:15:36.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:15:36.518: INFO: namespace: e2e-tests-pods-lkgph, resource: bindings, ignored listing per whitelist
Jul 22 11:15:36.569: INFO: namespace e2e-tests-pods-lkgph deletion completed in 6.089037202s

• [SLOW TEST:12.798 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
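What the update step above changes is spec.activeDeadlineSeconds: once the deadline (measured from pod start) elapses, the kubelet terminates the pod and its phase becomes Failed with reason DeadlineExceeded, exactly the transition logged at 11:15:30. A sketch of a pod carrying such a deadline (value, image and command are illustrative; the real test sets the field by updating an already-running pod):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds"},
        Spec: corev1.PodSpec{
            // Once the deadline expires, the kubelet kills the pod and it ends up
            // Phase=Failed with Reason=DeadlineExceeded.
            ActiveDeadlineSeconds: int64Ptr(5),
            Containers: []corev1.Container{{
                Name:    "main",
                Image:   "docker.io/library/busybox:1.29", // illustrative
                Command: []string{"sleep", "3600"},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
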
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:15:36.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0722 11:15:46.748809       7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 22 11:15:46.748: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:15:46.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-55tx2" for this suite.
Jul 22 11:15:52.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:15:52.838: INFO: namespace: e2e-tests-gc-55tx2, resource: bindings, ignored listing per whitelist
Jul 22 11:15:52.855: INFO: namespace e2e-tests-gc-55tx2 deletion completed in 6.103435746s

• [SLOW TEST:16.285 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
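"Not orphaning" means the ReplicationController is deleted with a cascading propagation policy, so the garbage collector also removes the pods it owns before the "wait for all pods to be garbage collected" step can pass. The log does not show which options the test passes, but the options object involved looks roughly like this (Background chosen here for illustration; Foreground also cascades, Orphan does not):

package main

import (
    "encoding/json"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Deleting the RC with a cascading policy lets the garbage collector clean up
    // the dependent pods instead of orphaning them.
    policy := metav1.DeletePropagationBackground
    opts := metav1.DeleteOptions{PropagationPolicy: &policy}

    out, _ := json.MarshalIndent(opts, "", "  ")
    fmt.Println(string(out))
}
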
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:15:52.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service endpoint-test2 in namespace e2e-tests-services-9h6rr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9h6rr to expose endpoints map[]
Jul 22 11:15:53.078: INFO: Get endpoints failed (13.139727ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jul 22 11:15:54.082: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9h6rr exposes endpoints map[] (1.017064777s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-9h6rr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9h6rr to expose endpoints map[pod1:[80]]
Jul 22 11:15:57.119: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9h6rr exposes endpoints map[pod1:[80]] (3.030033577s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-9h6rr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9h6rr to expose endpoints map[pod1:[80] pod2:[80]]
Jul 22 11:16:00.280: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9h6rr exposes endpoints map[pod1:[80] pod2:[80]] (3.157730028s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-9h6rr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9h6rr to expose endpoints map[pod2:[80]]
Jul 22 11:16:01.314: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9h6rr exposes endpoints map[pod2:[80]] (1.030291895s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-9h6rr
STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-9h6rr to expose endpoints map[]
Jul 22 11:16:01.351: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-9h6rr exposes endpoints map[] (32.795566ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:16:01.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-9h6rr" for this suite.
Jul 22 11:16:23.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:16:23.505: INFO: namespace: e2e-tests-services-9h6rr, resource: bindings, ignored listing per whitelist
Jul 22 11:16:23.598: INFO: namespace e2e-tests-services-9h6rr deletion completed in 22.121413913s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:30.743 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
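The endpoint maps above (map[] to map[pod1:[80]] to map[pod1:[80] pod2:[80]] and back) are produced by the endpoints controller, which publishes the IPs of Ready pods whose labels match the service selector. A rough sketch of the two objects involved (the label key/value, image and port details are assumptions, not read from the log):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    svc := corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
        Spec: corev1.ServiceSpec{
            // Pods whose labels match this selector and that are Ready get published
            // as endpoints of the service.
            Selector: map[string]string{"name": "endpoint-test2"}, // assumed label key/value
            Ports: []corev1.ServicePort{{
                Port:       80,
                TargetPort: intstr.FromInt(80),
                Protocol:   corev1.ProtocolTCP,
            }},
        },
    }
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "pod1",
            Labels: map[string]string{"name": "endpoint-test2"}, // must match the selector
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "pause",
                Image: "k8s.gcr.io/pause:3.1", // illustrative; any container serving port 80 works for real traffic
                Ports: []corev1.ContainerPort{{ContainerPort: 80}},
            }},
        },
    }
    for _, obj := range []interface{}{svc, pod} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
    }
}
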
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:16:23.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 22 11:16:28.304: INFO: Successfully updated pod "labelsupdatec6ed60cc-cc0c-11ea-aa05-0242ac11000b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:16:32.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-wfjxt" for this suite.
Jul 22 11:16:54.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:16:54.391: INFO: namespace: e2e-tests-projected-wfjxt, resource: bindings, ignored listing per whitelist
Jul 22 11:16:54.444: INFO: namespace e2e-tests-projected-wfjxt deletion completed in 22.094513845s

• [SLOW TEST:30.846 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
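The update at 11:16:28 works because the pod mounts its own metadata.labels through a projected downwardAPI volume: the kubelet rewrites the projected file when the labels change, so the running container sees the new values without a restart. A sketch of such a pod (volume name, mount path, image and command are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "labelsupdate",
            Labels: map[string]string{"step": "one"}, // updated later by the test
        },
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    // A projected volume with a downwardAPI source: the kubelet keeps the
                    // "labels" file in sync with pod.metadata.labels.
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path:     "labels",
                                    FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "docker.io/library/busybox:1.29", // illustrative
                Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
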
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:16:54.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:17:00.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-namespaces-7qpks" for this suite.
Jul 22 11:17:06.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:17:06.829: INFO: namespace: e2e-tests-namespaces-7qpks, resource: bindings, ignored listing per whitelist
Jul 22 11:17:06.867: INFO: namespace e2e-tests-namespaces-7qpks deletion completed in 6.092865416s
STEP: Destroying namespace "e2e-tests-nsdeletetest-s7tdb" for this suite.
Jul 22 11:17:06.869: INFO: Namespace e2e-tests-nsdeletetest-s7tdb was already deleted
STEP: Destroying namespace "e2e-tests-nsdeletetest-ftrw2" for this suite.
Jul 22 11:17:12.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:17:12.896: INFO: namespace: e2e-tests-nsdeletetest-ftrw2, resource: bindings, ignored listing per whitelist
Jul 22 11:17:12.959: INFO: namespace e2e-tests-nsdeletetest-ftrw2 deletion completed in 6.090109147s

• [SLOW TEST:18.515 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
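The check above leans on namespace lifecycle: deleting a namespace makes the namespace controller remove every object in it, services included, so a recreated namespace of the same name starts out with no services. For orientation, the objects involved are just an ordinary Namespace plus a namespaced Service along these lines (names and selector are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    // Deleting ns cascades to everything created inside it, including svc below.
    ns := corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "nsdeletetest"}} // illustrative name
    svc := corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "test-service", Namespace: ns.Name},
        Spec: corev1.ServiceSpec{
            Selector: map[string]string{"app": "test"},
            Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
        },
    }
    for _, obj := range []interface{}{ns, svc} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
    }
}
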
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:17:12.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-secret-85vx
STEP: Creating a pod to test atomic-volume-subpath
Jul 22 11:17:13.159: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-85vx" in namespace "e2e-tests-subpath-wwzvg" to be "success or failure"
Jul 22 11:17:13.177: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Pending", Reason="", readiness=false. Elapsed: 17.848278ms
Jul 22 11:17:15.377: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217357173s
Jul 22 11:17:17.443: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.283256862s
Jul 22 11:17:19.598: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Running", Reason="", readiness=true. Elapsed: 6.439038884s
Jul 22 11:17:21.603: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Running", Reason="", readiness=false. Elapsed: 8.443258109s
Jul 22 11:17:23.607: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Running", Reason="", readiness=false. Elapsed: 10.447292883s
Jul 22 11:17:25.611: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Running", Reason="", readiness=false. Elapsed: 12.451602371s
Jul 22 11:17:27.615: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Running", Reason="", readiness=false. Elapsed: 14.456017586s
Jul 22 11:17:29.619: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Running", Reason="", readiness=false. Elapsed: 16.459335194s
Jul 22 11:17:31.623: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Running", Reason="", readiness=false. Elapsed: 18.463259203s
Jul 22 11:17:33.626: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Running", Reason="", readiness=false. Elapsed: 20.466838119s
Jul 22 11:17:35.630: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Running", Reason="", readiness=false. Elapsed: 22.471032933s
Jul 22 11:17:37.634: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Running", Reason="", readiness=false. Elapsed: 24.474966682s
Jul 22 11:17:39.638: INFO: Pod "pod-subpath-test-secret-85vx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.478984513s
STEP: Saw pod success
Jul 22 11:17:39.638: INFO: Pod "pod-subpath-test-secret-85vx" satisfied condition "success or failure"
Jul 22 11:17:39.641: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-secret-85vx container test-container-subpath-secret-85vx: 
STEP: delete the pod
Jul 22 11:17:39.713: INFO: Waiting for pod pod-subpath-test-secret-85vx to disappear
Jul 22 11:17:39.760: INFO: Pod pod-subpath-test-secret-85vx no longer exists
STEP: Deleting pod pod-subpath-test-secret-85vx
Jul 22 11:17:39.760: INFO: Deleting pod "pod-subpath-test-secret-85vx" in namespace "e2e-tests-subpath-wwzvg"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:17:39.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-wwzvg" for this suite.
Jul 22 11:17:45.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:17:45.921: INFO: namespace: e2e-tests-subpath-wwzvg, resource: bindings, ignored listing per whitelist
Jul 22 11:17:45.983: INFO: namespace e2e-tests-subpath-wwzvg deletion completed in 6.135544176s

• [SLOW TEST:33.024 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
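Secret volumes are "atomic writer" volumes: the kubelet materialises them through a timestamped directory plus a symlink swap, and the test checks that a subPath mount keeps pointing at valid content across that swap while the container reads it. A sketch of the shape of pod involved (secret name, key, image and command are assumptions; only the pod and container names echo the log):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"}, // assumed secret name
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container-subpath-secret",
                Image: "docker.io/library/busybox:1.29", // illustrative
                // With SubPath pointing at a single key, the mount path is that key's file.
                Command: []string{"sh", "-c", "cat /test-volume && sleep 20"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test-volume",
                    SubPath:   "secret-key", // assumed key name
                }},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
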
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:17:45.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-f7ffe8e8-cc0c-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 11:17:46.086: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f80192e3-cc0c-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-cchpg" to be "success or failure"
Jul 22 11:17:46.093: INFO: Pod "pod-projected-configmaps-f80192e3-cc0c-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093026ms
Jul 22 11:17:48.209: INFO: Pod "pod-projected-configmaps-f80192e3-cc0c-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122907177s
Jul 22 11:17:50.213: INFO: Pod "pod-projected-configmaps-f80192e3-cc0c-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126903545s
STEP: Saw pod success
Jul 22 11:17:50.213: INFO: Pod "pod-projected-configmaps-f80192e3-cc0c-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:17:50.216: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-f80192e3-cc0c-11ea-aa05-0242ac11000b container projected-configmap-volume-test: 
STEP: delete the pod
Jul 22 11:17:50.237: INFO: Waiting for pod pod-projected-configmaps-f80192e3-cc0c-11ea-aa05-0242ac11000b to disappear
Jul 22 11:17:50.242: INFO: Pod pod-projected-configmaps-f80192e3-cc0c-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:17:50.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cchpg" for this suite.
Jul 22 11:17:56.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:17:56.293: INFO: namespace: e2e-tests-projected-cchpg, resource: bindings, ignored listing per whitelist
Jul 22 11:17:56.330: INFO: namespace e2e-tests-projected-cchpg deletion completed in 6.08597466s

• [SLOW TEST:10.347 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
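"Mappings and Item mode set" refers to the items list of the ConfigMap projection: each key is remapped to an explicit path and given its own file mode, and the container then checks the mode and content of the resulting file. Roughly (the key name, remapped path, 0400 mode, image and command are illustrative; the ConfigMap name follows the one logged above):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
                                // Each key is remapped to a new path and given an explicit mode.
                                Items: []corev1.KeyToPath{{
                                    Key:  "data-1",         // assumed key name
                                    Path: "path/to/data-2", // remapped file path
                                    Mode: int32Ptr(0400),   // the "Item mode" in the test name
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "projected-configmap-volume-test",
                Image:        "docker.io/library/busybox:1.29", // illustrative
                Command:      []string{"sh", "-c", "ls -l /etc/configmap-volume/path/to/data-2 && cat /etc/configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/configmap-volume"}},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
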
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:17:56.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 11:17:56.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jul 22 11:17:56.631: INFO: stderr: ""
Jul 22 11:17:56.631: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-10T10:25:27Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:50:51Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:17:56.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-jqbbn" for this suite.
Jul 22 11:18:02.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:18:02.728: INFO: namespace: e2e-tests-kubectl-jqbbn, resource: bindings, ignored listing per whitelist
Jul 22 11:18:02.739: INFO: namespace e2e-tests-kubectl-jqbbn deletion completed in 6.103857477s

• [SLOW TEST:6.408 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:18:02.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 22 11:18:02.843: INFO: Waiting up to 5m0s for pod "pod-01fd0afa-cc0d-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-th4pv" to be "success or failure"
Jul 22 11:18:02.857: INFO: Pod "pod-01fd0afa-cc0d-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.461767ms
Jul 22 11:18:04.861: INFO: Pod "pod-01fd0afa-cc0d-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017605382s
Jul 22 11:18:06.865: INFO: Pod "pod-01fd0afa-cc0d-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021371583s
STEP: Saw pod success
Jul 22 11:18:06.865: INFO: Pod "pod-01fd0afa-cc0d-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:18:06.867: INFO: Trying to get logs from node hunter-worker2 pod pod-01fd0afa-cc0d-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 11:18:06.888: INFO: Waiting for pod pod-01fd0afa-cc0d-11ea-aa05-0242ac11000b to disappear
Jul 22 11:18:06.898: INFO: Pod pod-01fd0afa-cc0d-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:18:06.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-th4pv" for this suite.
Jul 22 11:18:12.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:18:12.932: INFO: namespace: e2e-tests-emptydir-th4pv, resource: bindings, ignored listing per whitelist
Jul 22 11:18:12.985: INFO: namespace e2e-tests-emptydir-th4pv deletion completed in 6.08346323s

• [SLOW TEST:10.245 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
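In the test name, (root,0666,tmpfs) means: write as root, expect 0666 permissions on the file, and back the emptyDir with tmpfs. The tmpfs part is just medium: Memory on the volume source; a sketch follows (the suite uses its own mounttest image, so the busybox command here is only an illustration of the same check):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium=Memory backs the emptyDir with tmpfs.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container",
                Image:   "docker.io/library/busybox:1.29", // illustrative
                Command: []string{"sh", "-c", "echo data > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
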
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:18:12.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 11:18:13.168: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"081f30e1-cc0d-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc001d76186), BlockOwnerDeletion:(*bool)(0xc001d76187)}}
Jul 22 11:18:13.223: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"081e2658-cc0d-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc002402a92), BlockOwnerDeletion:(*bool)(0xc002402a93)}}
Jul 22 11:18:13.237: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"081eb09b-cc0d-11ea-b2c9-0242ac120008", Controller:(*bool)(0xc001d763fa), BlockOwnerDeletion:(*bool)(0xc001d763fb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:18:18.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-98pch" for this suite.
Jul 22 11:18:24.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:18:24.338: INFO: namespace: e2e-tests-gc-98pch, resource: bindings, ignored listing per whitelist
Jul 22 11:18:24.380: INFO: namespace e2e-tests-gc-98pch deletion completed in 6.081805932s

• [SLOW TEST:11.395 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
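The three OwnerReferences printed above form a cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), and the point of the test is that the garbage collector still manages to delete all three instead of deadlocking on the circle. The wiring, reduced to the metadata fields the log prints (UIDs are placeholders here; in the real run they are the UIDs the API server assigned):

package main

import (
    "encoding/json"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// ownerRefTo builds an OwnerReference to an already-created pod, mirroring the
// references printed in the log above.
func ownerRefTo(owner metav1.ObjectMeta) metav1.OwnerReference {
    return metav1.OwnerReference{
        APIVersion:         "v1",
        Kind:               "Pod",
        Name:               owner.Name,
        UID:                owner.UID, // must be the UID the API server assigned
        Controller:         boolPtr(true),
        BlockOwnerDeletion: boolPtr(true),
    }
}

func main() {
    // Placeholder UIDs; real code would read them from the created pods.
    pod1 := metav1.ObjectMeta{Name: "pod1", UID: "uid-of-pod1"}
    pod2 := metav1.ObjectMeta{Name: "pod2", UID: "uid-of-pod2"}
    pod3 := metav1.ObjectMeta{Name: "pod3", UID: "uid-of-pod3"}

    // Wire the circle: pod1 owned by pod3, pod2 owned by pod1, pod3 owned by pod2.
    pod1.OwnerReferences = []metav1.OwnerReference{ownerRefTo(pod3)}
    pod2.OwnerReferences = []metav1.OwnerReference{ownerRefTo(pod1)}
    pod3.OwnerReferences = []metav1.OwnerReference{ownerRefTo(pod2)}

    for _, m := range []metav1.ObjectMeta{pod1, pod2, pod3} {
        out, _ := json.MarshalIndent(m.OwnerReferences, "", "  ")
        fmt.Printf("%s.ObjectMeta.OwnerReferences=%s\n", m.Name, out)
    }
}
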
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:18:24.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-downwardapi-wh6m
STEP: Creating a pod to test atomic-volume-subpath
Jul 22 11:18:24.521: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-wh6m" in namespace "e2e-tests-subpath-kbxxt" to be "success or failure"
Jul 22 11:18:24.560: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Pending", Reason="", readiness=false. Elapsed: 38.592378ms
Jul 22 11:18:26.564: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042224837s
Jul 22 11:18:28.568: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046714864s
Jul 22 11:18:30.575: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053680499s
Jul 22 11:18:32.579: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Running", Reason="", readiness=false. Elapsed: 8.057965705s
Jul 22 11:18:34.583: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Running", Reason="", readiness=false. Elapsed: 10.062178685s
Jul 22 11:18:36.587: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Running", Reason="", readiness=false. Elapsed: 12.065978897s
Jul 22 11:18:38.591: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Running", Reason="", readiness=false. Elapsed: 14.069288429s
Jul 22 11:18:40.595: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Running", Reason="", readiness=false. Elapsed: 16.073635311s
Jul 22 11:18:42.599: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Running", Reason="", readiness=false. Elapsed: 18.077921533s
Jul 22 11:18:44.604: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Running", Reason="", readiness=false. Elapsed: 20.082249563s
Jul 22 11:18:46.609: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Running", Reason="", readiness=false. Elapsed: 22.088056453s
Jul 22 11:18:48.614: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Running", Reason="", readiness=false. Elapsed: 24.092521897s
Jul 22 11:18:50.647: INFO: Pod "pod-subpath-test-downwardapi-wh6m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.125453652s
STEP: Saw pod success
Jul 22 11:18:50.647: INFO: Pod "pod-subpath-test-downwardapi-wh6m" satisfied condition "success or failure"
Jul 22 11:18:50.650: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-downwardapi-wh6m container test-container-subpath-downwardapi-wh6m: 
STEP: delete the pod
Jul 22 11:18:50.670: INFO: Waiting for pod pod-subpath-test-downwardapi-wh6m to disappear
Jul 22 11:18:50.690: INFO: Pod pod-subpath-test-downwardapi-wh6m no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-wh6m
Jul 22 11:18:50.690: INFO: Deleting pod "pod-subpath-test-downwardapi-wh6m" in namespace "e2e-tests-subpath-kbxxt"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:18:50.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-kbxxt" for this suite.
Jul 22 11:18:56.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:18:56.980: INFO: namespace: e2e-tests-subpath-kbxxt, resource: bindings, ignored listing per whitelist
Jul 22 11:18:56.987: INFO: namespace e2e-tests-subpath-kbxxt deletion completed in 6.291083772s

• [SLOW TEST:32.607 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
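Same atomic-writer machinery as the secret case earlier, this time with a plain (non-projected) downwardAPI volume source feeding the subPath mount. A compressed sketch (paths, image and command are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-downwardapi"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Downward API volumes are written atomically, like secrets and configmaps.
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "downward/podname",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container-subpath-downwardapi",
                Image:   "docker.io/library/busybox:1.29", // illustrative
                Command: []string{"sh", "-c", "cat /test-volume && sleep 20"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test-volume",
                    SubPath:   "downward/podname",
                }},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
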
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:18:56.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-22546ebd-cc0d-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 11:18:57.094: INFO: Waiting up to 5m0s for pod "pod-configmaps-225532a2-cc0d-11ea-aa05-0242ac11000b" in namespace "e2e-tests-configmap-lqsc7" to be "success or failure"
Jul 22 11:18:57.156: INFO: Pod "pod-configmaps-225532a2-cc0d-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 62.364574ms
Jul 22 11:18:59.159: INFO: Pod "pod-configmaps-225532a2-cc0d-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065479216s
Jul 22 11:19:01.164: INFO: Pod "pod-configmaps-225532a2-cc0d-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06982455s
STEP: Saw pod success
Jul 22 11:19:01.164: INFO: Pod "pod-configmaps-225532a2-cc0d-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:19:01.167: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-225532a2-cc0d-11ea-aa05-0242ac11000b container configmap-volume-test: 
STEP: delete the pod
Jul 22 11:19:01.184: INFO: Waiting for pod pod-configmaps-225532a2-cc0d-11ea-aa05-0242ac11000b to disappear
Jul 22 11:19:01.188: INFO: Pod pod-configmaps-225532a2-cc0d-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:19:01.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-lqsc7" for this suite.
Jul 22 11:19:07.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:19:07.321: INFO: namespace: e2e-tests-configmap-lqsc7, resource: bindings, ignored listing per whitelist
Jul 22 11:19:07.340: INFO: namespace e2e-tests-configmap-lqsc7 deletion completed in 6.148677432s

• [SLOW TEST:10.352 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
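"As non-root" means the container runs with a non-zero UID and must still be able to read the remapped ConfigMap file, so the interesting parts of the spec are the items mapping on the volume and runAsUser on the container. A sketch (key, path, UID 1000, image and command are assumptions; the ConfigMap name mirrors the logged one):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-nonroot"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
                        Items:                []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}}, // assumed key
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "configmap-volume-test",
                Image:   "docker.io/library/busybox:1.29", // illustrative
                Command: []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
                // The container runs with a non-zero UID and must still read the file.
                SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(1000)},
                VolumeMounts:    []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}
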
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:19:07.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-b2pdb/configmap-test-287d530d-cc0d-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 11:19:07.444: INFO: Waiting up to 5m0s for pod "pod-configmaps-287f5c6c-cc0d-11ea-aa05-0242ac11000b" in namespace "e2e-tests-configmap-b2pdb" to be "success or failure"
Jul 22 11:19:07.447: INFO: Pod "pod-configmaps-287f5c6c-cc0d-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.346885ms
Jul 22 11:19:09.451: INFO: Pod "pod-configmaps-287f5c6c-cc0d-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007545284s
Jul 22 11:19:11.460: INFO: Pod "pod-configmaps-287f5c6c-cc0d-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015862353s
STEP: Saw pod success
Jul 22 11:19:11.460: INFO: Pod "pod-configmaps-287f5c6c-cc0d-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:19:11.462: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-287f5c6c-cc0d-11ea-aa05-0242ac11000b container env-test: 
STEP: delete the pod
Jul 22 11:19:11.513: INFO: Waiting for pod pod-configmaps-287f5c6c-cc0d-11ea-aa05-0242ac11000b to disappear
Jul 22 11:19:11.525: INFO: Pod pod-configmaps-287f5c6c-cc0d-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:19:11.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-b2pdb" for this suite.
Jul 22 11:19:17.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:19:17.569: INFO: namespace: e2e-tests-configmap-b2pdb, resource: bindings, ignored listing per whitelist
Jul 22 11:19:17.617: INFO: namespace e2e-tests-configmap-b2pdb deletion completed in 6.087281673s

• [SLOW TEST:10.277 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
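Here the ConfigMap is consumed through env.valueFrom.configMapKeyRef rather than a volume: the kubelet resolves the key when it starts the container and injects it as an ordinary environment variable. A sketch of the pair of objects (key, value, variable name, image and command are illustrative):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cm := corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
        Data:       map[string]string{"data-1": "value-1"}, // assumed key/value
    }
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "docker.io/library/busybox:1.29", // illustrative
                Command: []string{"sh", "-c", "env | grep CONFIG_DATA_1"},
                Env: []corev1.EnvVar{{
                    Name: "CONFIG_DATA_1",
                    // Resolved from the ConfigMap key when the container starts.
                    ValueFrom: &corev1.EnvVarSource{
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                            Key:                  "data-1",
                        },
                    },
                }},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
    for _, obj := range []interface{}{cm, pod} {
        out, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(out))
    }
}
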
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:19:17.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-m4sqr
Jul 22 11:19:21.763: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-m4sqr
STEP: checking the pod's current state and verifying that restartCount is present
Jul 22 11:19:21.766: INFO: Initial restart count of pod liveness-http is 0
Jul 22 11:19:35.795: INFO: Restart count of pod e2e-tests-container-probe-m4sqr/liveness-http is now 1 (14.028416566s elapsed)
Jul 22 11:19:55.896: INFO: Restart count of pod e2e-tests-container-probe-m4sqr/liveness-http is now 2 (34.129520378s elapsed)
Jul 22 11:20:15.936: INFO: Restart count of pod e2e-tests-container-probe-m4sqr/liveness-http is now 3 (54.169567936s elapsed)
Jul 22 11:20:34.126: INFO: Restart count of pod e2e-tests-container-probe-m4sqr/liveness-http is now 4 (1m12.359967205s elapsed)
Jul 22 11:21:44.431: INFO: Restart count of pod e2e-tests-container-probe-m4sqr/liveness-http is now 5 (2m22.665289561s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:21:44.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-m4sqr" for this suite.
Jul 22 11:21:50.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:21:50.548: INFO: namespace: e2e-tests-container-probe-m4sqr, resource: bindings, ignored listing per whitelist
Jul 22 11:21:50.550: INFO: namespace e2e-tests-container-probe-m4sqr deletion completed in 6.099157834s

• [SLOW TEST:152.933 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
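The climbing restart count above comes from the kubelet repeatedly failing an HTTP liveness probe and restarting the container. A minimal sketch of such a pod spec against the same era of the core/v1 API (where Probe still embeds corev1.Handler); the image tag and probe timings are assumptions, and a clientset built as in the earlier sketch would create the pod:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// LivenessHTTPPod returns a pod whose container serves /healthz successfully
// for a short time and then starts failing, so the kubelet's liveness probe
// keeps restarting it and restartCount climbs as in the log above.
func LivenessHTTPPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "liveness-http",
			Labels: map[string]string{"test": "liveness"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "gcr.io/kubernetes-e2e-test-images/liveness:1.0", // assumed tag
				Command: []string{"/server"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15, // give the server time to come up
					FailureThreshold:    1,  // restart on the first failed probe
				},
			}},
		},
	}
}
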
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:21:50.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 11:21:50.746: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89d37f6c-cc0d-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-gh4l2" to be "success or failure"
Jul 22 11:21:50.768: INFO: Pod "downwardapi-volume-89d37f6c-cc0d-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.661551ms
Jul 22 11:21:52.772: INFO: Pod "downwardapi-volume-89d37f6c-cc0d-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025735022s
Jul 22 11:21:54.776: INFO: Pod "downwardapi-volume-89d37f6c-cc0d-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.029965251s
Jul 22 11:21:56.780: INFO: Pod "downwardapi-volume-89d37f6c-cc0d-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033958936s
STEP: Saw pod success
Jul 22 11:21:56.780: INFO: Pod "downwardapi-volume-89d37f6c-cc0d-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:21:56.783: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-89d37f6c-cc0d-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 11:21:56.805: INFO: Waiting for pod downwardapi-volume-89d37f6c-cc0d-11ea-aa05-0242ac11000b to disappear
Jul 22 11:21:56.809: INFO: Pod downwardapi-volume-89d37f6c-cc0d-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:21:56.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-gh4l2" for this suite.
Jul 22 11:22:02.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:22:02.892: INFO: namespace: e2e-tests-projected-gh4l2, resource: bindings, ignored listing per whitelist
Jul 22 11:22:02.917: INFO: namespace e2e-tests-projected-gh4l2 deletion completed in 6.103985856s

• [SLOW TEST:12.367 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
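Here the pod mounts a projected downwardAPI volume exposing limits.cpu for a container that sets no CPU limit, so the kubelet writes node allocatable CPU into the file, which is what the test asserts. A sketch of that spec, with illustrative names and a busybox command standing in for the suite's own test image:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// DownwardAPICPULimitPod mounts a projected downwardAPI volume that exposes the
// container's CPU limit as a file. Because the container declares no CPU limit,
// the value falls back to node allocatable CPU.
func DownwardAPICPULimitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
				// No Resources.Limits here on purpose.
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}
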
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:22:02.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jul 22 11:22:03.513: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-m8brk,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8brk/configmaps/e2e-watch-test-watch-closed,UID:914176ef-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177222,Generation:0,CreationTimestamp:2020-07-22 11:22:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 22 11:22:03.513: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-m8brk,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8brk/configmaps/e2e-watch-test-watch-closed,UID:914176ef-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177224,Generation:0,CreationTimestamp:2020-07-22 11:22:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jul 22 11:22:03.529: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-m8brk,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8brk/configmaps/e2e-watch-test-watch-closed,UID:914176ef-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177225,Generation:0,CreationTimestamp:2020-07-22 11:22:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 22 11:22:03.529: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-m8brk,SelfLink:/api/v1/namespaces/e2e-tests-watch-m8brk/configmaps/e2e-watch-test-watch-closed,UID:914176ef-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177226,Generation:0,CreationTimestamp:2020-07-22 11:22:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:22:03.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-m8brk" for this suite.
Jul 22 11:22:09.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:22:09.600: INFO: namespace: e2e-tests-watch-m8brk, resource: bindings, ignored listing per whitelist
Jul 22 11:22:09.618: INFO: namespace e2e-tests-watch-m8brk deletion completed in 6.070931365s

• [SLOW TEST:6.700 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
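The flow above closes a watch after two notifications, mutates the configmap while no watch is open, then resumes from the last observed resource version and replays the missed MODIFIED and DELETED events. A sketch of the resume step with client-go's v1.13-era Watch signature; the function name and parameters are illustrative:

package e2esketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// ResumeConfigMapWatch reopens a watch on configmaps in ns starting from
// lastSeenRV, the ResourceVersion of the last object observed by a previous
// watch, and prints every event delivered since that version.
func ResumeConfigMapWatch(client kubernetes.Interface, ns, lastSeenRV, labelSelector string) error {
	w, err := client.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		LabelSelector:   labelSelector,
		ResourceVersion: lastSeenRV, // replay changes made after this version
	})
	if err != nil {
		return err
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue // e.g. a watch error object
		}
		fmt.Printf("Got : %s %s (rv=%s)\n", ev.Type, cm.Name, cm.ResourceVersion)
		if ev.Type == watch.Deleted {
			return nil // stop once the deletion has been observed, as the test does
		}
	}
	return nil
}
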
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:22:09.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 22 11:22:09.733: INFO: Waiting up to 5m0s for pod "pod-9527c33c-cc0d-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-2wxcb" to be "success or failure"
Jul 22 11:22:09.762: INFO: Pod "pod-9527c33c-cc0d-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.41659ms
Jul 22 11:22:11.788: INFO: Pod "pod-9527c33c-cc0d-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05472709s
Jul 22 11:22:14.039: INFO: Pod "pod-9527c33c-cc0d-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.305920349s
STEP: Saw pod success
Jul 22 11:22:14.039: INFO: Pod "pod-9527c33c-cc0d-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:22:14.041: INFO: Trying to get logs from node hunter-worker pod pod-9527c33c-cc0d-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 11:22:14.123: INFO: Waiting for pod pod-9527c33c-cc0d-11ea-aa05-0242ac11000b to disappear
Jul 22 11:22:14.207: INFO: Pod pod-9527c33c-cc0d-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:22:14.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-2wxcb" for this suite.
Jul 22 11:22:20.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:22:20.318: INFO: namespace: e2e-tests-emptydir-2wxcb, resource: bindings, ignored listing per whitelist
Jul 22 11:22:20.327: INFO: namespace e2e-tests-emptydir-2wxcb deletion completed in 6.115323034s

• [SLOW TEST:10.709 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
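This case mounts a memory-backed (tmpfs) emptyDir, runs as a non-root user and checks a file created with 0666 permissions. A sketch of an equivalent pod spec, assuming a busybox image and an arbitrary non-root UID in place of the suite's own mount-test tooling:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// EmptyDirTmpfsPod runs as a non-root user, mounts a memory-backed emptyDir,
// writes a file with 0666 permissions and prints the result plus the mount
// entry showing the volume is tmpfs.
func EmptyDirTmpfsPod() *corev1.Pod {
	nonRoot := int64(1001) // any non-zero UID; the exact value is an assumption
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /cache/f && chmod 0666 /cache/f && ls -l /cache/f && mount | grep /cache"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "cache",
					MountPath: "/cache",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cache",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}
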
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:22:20.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 11:22:20.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version --client'
Jul 22 11:22:20.498: INFO: stderr: ""
Jul 22 11:22:20.498: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.12\", GitCommit:\"a8b52209ee172232b6db7a6e0ce2adc77458829f\", GitTreeState:\"clean\", BuildDate:\"2020-07-10T10:25:27Z\", GoVersion:\"go1.11.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
Jul 22 11:22:20.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zczqr'
Jul 22 11:22:23.618: INFO: stderr: ""
Jul 22 11:22:23.618: INFO: stdout: "replicationcontroller/redis-master created\n"
Jul 22 11:22:23.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zczqr'
Jul 22 11:22:23.902: INFO: stderr: ""
Jul 22 11:22:23.902: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul 22 11:22:24.906: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 11:22:24.906: INFO: Found 0 / 1
Jul 22 11:22:25.980: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 11:22:25.980: INFO: Found 0 / 1
Jul 22 11:22:26.907: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 11:22:26.907: INFO: Found 0 / 1
Jul 22 11:22:27.986: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 11:22:27.986: INFO: Found 1 / 1
Jul 22 11:22:27.986: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul 22 11:22:27.989: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 11:22:27.990: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 22 11:22:27.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-vpfrm --namespace=e2e-tests-kubectl-zczqr'
Jul 22 11:22:28.113: INFO: stderr: ""
Jul 22 11:22:28.113: INFO: stdout: "Name:               redis-master-vpfrm\nNamespace:          e2e-tests-kubectl-zczqr\nPriority:           0\nPriorityClassName:  \nNode:               hunter-worker/172.18.0.4\nStart Time:         Wed, 22 Jul 2020 11:22:23 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        \nStatus:             Running\nIP:                 10.244.2.67\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://92e8697861dc274caf417c2c0c6031f898b8f0a54b1e76415fdbe452a7e90102\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 22 Jul 2020 11:22:26 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pgtng (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-pgtng:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-pgtng\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                    Message\n  ----    ------     ----  ----                    -------\n  Normal  Scheduled  5s    default-scheduler       Successfully assigned e2e-tests-kubectl-zczqr/redis-master-vpfrm to hunter-worker\n  Normal  Pulled     3s    kubelet, hunter-worker  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, hunter-worker  Created container\n  Normal  Started    2s    kubelet, hunter-worker  Started container\n"
Jul 22 11:22:28.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=e2e-tests-kubectl-zczqr'
Jul 22 11:22:28.368: INFO: stderr: ""
Jul 22 11:22:28.368: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-zczqr\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: redis-master-vpfrm\n"
Jul 22 11:22:28.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=e2e-tests-kubectl-zczqr'
Jul 22 11:22:28.481: INFO: stderr: ""
Jul 22 11:22:28.481: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-zczqr\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.98.154.56\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.2.67:6379\nSession Affinity:  None\nEvents:            \n"
Jul 22 11:22:28.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node hunter-control-plane'
Jul 22 11:22:28.627: INFO: stderr: ""
Jul 22 11:22:28.627: INFO: stdout: "Name:               hunter-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=hunter-control-plane\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 10 Jul 2020 10:22:18 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 22 Jul 2020 11:22:26 +0000   Fri, 10 Jul 2020 10:22:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 22 Jul 2020 11:22:26 +0000   Fri, 10 Jul 2020 10:22:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 22 Jul 2020 11:22:26 +0000   Fri, 10 Jul 2020 10:22:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 22 Jul 2020 11:22:26 +0000   Fri, 10 Jul 2020 10:23:08 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.8\n  Hostname:    hunter-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nSystem Info:\n Machine ID:                 86b921187bcd42a69301f53c2d21b8f0\n System UUID:                dbd65bbc-7a27-4b36-b69e-be53f27cba5c\n Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version:             4.15.0-109-generic\n OS Image:                   Ubuntu 19.10\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.3.3-14-g449e9269\n Kubelet Version:            v1.13.12\n Kube-Proxy Version:         v1.13.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                            ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-54ff9cd656-46fs4                        100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     12d\n  kube-system                coredns-54ff9cd656-gzt7d                        100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     12d\n  kube-system                etcd-hunter-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d\n  kube-system                kindnet-r4bfs                                   100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      12d\n  kube-system                kube-apiserver-hunter-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         12d\n  kube-system                kube-controller-manager-hunter-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         12d\n  kube-system                kube-proxy-4jv56                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d\n  kube-system                kube-scheduler-hunter-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         12d\n  local-path-storage         local-path-provisioner-674595c7-jw5rw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Jul 22 11:22:28.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace e2e-tests-kubectl-zczqr'
Jul 22 11:22:28.728: INFO: stderr: ""
Jul 22 11:22:28.728: INFO: stdout: "Name:         e2e-tests-kubectl-zczqr\nLabels:       e2e-framework=kubectl\n              e2e-run=aed10f48-cc08-11ea-aa05-0242ac11000b\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:22:28.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zczqr" for this suite.
Jul 22 11:23:02.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:23:02.787: INFO: namespace: e2e-tests-kubectl-zczqr, resource: bindings, ignored listing per whitelist
Jul 22 11:23:02.834: INFO: namespace e2e-tests-kubectl-zczqr deletion completed in 34.103296694s

• [SLOW TEST:42.507 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:23:02.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 22 11:23:07.563: INFO: Successfully updated pod "pod-update-b4e59f4d-cc0d-11ea-aa05-0242ac11000b"
STEP: verifying the updated pod is in kubernetes
Jul 22 11:23:07.593: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:23:07.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-995rb" for this suite.
Jul 22 11:23:29.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:23:29.839: INFO: namespace: e2e-tests-pods-995rb, resource: bindings, ignored listing per whitelist
Jul 22 11:23:29.855: INFO: namespace e2e-tests-pods-995rb deletion completed in 22.258567955s

• [SLOW TEST:27.021 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
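The update above is the usual read-modify-write cycle: get the pod, change something in its metadata, write it back, then re-read to confirm. A sketch of that cycle with client-go's v1.13-era signatures; what exactly the suite mutates is not visible in the log, so a label change is used here for illustration, and conflict retries are omitted to keep the sketch short:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// UpdatePodLabel fetches a pod, flips one label and writes it back. On a
// ResourceVersion conflict the caller would normally retry the whole cycle.
func UpdatePodLabel(client kubernetes.Interface, ns, name string) (*corev1.Pod, error) {
	pod, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated" // illustrative value; any label change works
	return client.CoreV1().Pods(ns).Update(pod)
}
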
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:23:29.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jul 22 11:23:30.159: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zwr54,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwr54/configmaps/e2e-watch-test-configmap-a,UID:c51653e6-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177490,Generation:0,CreationTimestamp:2020-07-22 11:23:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 22 11:23:30.159: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zwr54,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwr54/configmaps/e2e-watch-test-configmap-a,UID:c51653e6-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177490,Generation:0,CreationTimestamp:2020-07-22 11:23:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jul 22 11:23:40.166: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zwr54,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwr54/configmaps/e2e-watch-test-configmap-a,UID:c51653e6-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177509,Generation:0,CreationTimestamp:2020-07-22 11:23:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul 22 11:23:40.166: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zwr54,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwr54/configmaps/e2e-watch-test-configmap-a,UID:c51653e6-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177509,Generation:0,CreationTimestamp:2020-07-22 11:23:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jul 22 11:23:50.174: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zwr54,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwr54/configmaps/e2e-watch-test-configmap-a,UID:c51653e6-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177529,Generation:0,CreationTimestamp:2020-07-22 11:23:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 22 11:23:50.174: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zwr54,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwr54/configmaps/e2e-watch-test-configmap-a,UID:c51653e6-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177529,Generation:0,CreationTimestamp:2020-07-22 11:23:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jul 22 11:24:00.181: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zwr54,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwr54/configmaps/e2e-watch-test-configmap-a,UID:c51653e6-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177549,Generation:0,CreationTimestamp:2020-07-22 11:23:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 22 11:24:00.182: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-zwr54,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwr54/configmaps/e2e-watch-test-configmap-a,UID:c51653e6-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177549,Generation:0,CreationTimestamp:2020-07-22 11:23:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jul 22 11:24:10.189: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zwr54,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwr54/configmaps/e2e-watch-test-configmap-b,UID:dcf3f59b-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177569,Generation:0,CreationTimestamp:2020-07-22 11:24:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 22 11:24:10.189: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zwr54,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwr54/configmaps/e2e-watch-test-configmap-b,UID:dcf3f59b-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177569,Generation:0,CreationTimestamp:2020-07-22 11:24:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jul 22 11:24:20.195: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zwr54,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwr54/configmaps/e2e-watch-test-configmap-b,UID:dcf3f59b-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177589,Generation:0,CreationTimestamp:2020-07-22 11:24:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 22 11:24:20.195: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-zwr54,SelfLink:/api/v1/namespaces/e2e-tests-watch-zwr54/configmaps/e2e-watch-test-configmap-b,UID:dcf3f59b-cc0d-11ea-b2c9-0242ac120008,ResourceVersion:2177589,Generation:0,CreationTimestamp:2020-07-22 11:24:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:24:30.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-zwr54" for this suite.
Jul 22 11:24:36.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:24:36.285: INFO: namespace: e2e-tests-watch-zwr54, resource: bindings, ignored listing per whitelist
Jul 22 11:24:36.296: INFO: namespace e2e-tests-watch-zwr54 deletion completed in 6.091863618s

• [SLOW TEST:66.441 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:24:36.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-l6t5f.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-l6t5f.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-l6t5f.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-l6t5f.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-l6t5f.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-l6t5f.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 22 11:24:48.846: INFO: DNS probes using e2e-tests-dns-l6t5f/dns-test-ecbe16e0-cc0d-11ea-aa05-0242ac11000b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:24:48.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-l6t5f" for this suite.
Jul 22 11:24:55.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:24:55.555: INFO: namespace: e2e-tests-dns-l6t5f, resource: bindings, ignored listing per whitelist
Jul 22 11:24:55.580: INFO: namespace e2e-tests-dns-l6t5f deletion completed in 6.554641145s

• [SLOW TEST:19.284 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:24:55.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-f81790c1-cc0d-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 11:24:55.834: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f819ccd6-cc0d-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-hcsww" to be "success or failure"
Jul 22 11:24:55.858: INFO: Pod "pod-projected-secrets-f819ccd6-cc0d-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.521137ms
Jul 22 11:24:57.879: INFO: Pod "pod-projected-secrets-f819ccd6-cc0d-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045111648s
Jul 22 11:24:59.883: INFO: Pod "pod-projected-secrets-f819ccd6-cc0d-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048834095s
STEP: Saw pod success
Jul 22 11:24:59.883: INFO: Pod "pod-projected-secrets-f819ccd6-cc0d-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:24:59.886: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-f819ccd6-cc0d-11ea-aa05-0242ac11000b container projected-secret-volume-test: 
STEP: delete the pod
Jul 22 11:24:59.905: INFO: Waiting for pod pod-projected-secrets-f819ccd6-cc0d-11ea-aa05-0242ac11000b to disappear
Jul 22 11:24:59.909: INFO: Pod pod-projected-secrets-f819ccd6-cc0d-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:24:59.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-hcsww" for this suite.
Jul 22 11:25:05.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:25:05.958: INFO: namespace: e2e-tests-projected-hcsww, resource: bindings, ignored listing per whitelist
Jul 22 11:25:06.006: INFO: namespace e2e-tests-projected-hcsww deletion completed in 6.093468855s

• [SLOW TEST:10.425 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
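Here the secret reaches the container through a projected volume rather than a plain secret volume. A sketch of such a pod spec, assuming the referenced secret has a key named data-1 and using busybox in place of the suite's own test image:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ProjectedSecretPod mounts a secret through a projected volume and dumps one
// of the mapped files.
func ProjectedSecretPod(secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
								// With no Items listed, every key in the secret
								// becomes a file named after the key.
							},
						}},
					},
				},
			}},
		},
	}
}
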
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:25:06.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:25:06.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-pk5sd" for this suite.
Jul 22 11:25:12.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:25:12.300: INFO: namespace: e2e-tests-kubelet-test-pk5sd, resource: bindings, ignored listing per whitelist
Jul 22 11:25:12.358: INFO: namespace e2e-tests-kubelet-test-pk5sd deletion completed in 6.100638585s

• [SLOW TEST:6.352 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:25:12.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap e2e-tests-configmap-p4sqv/configmap-test-0214b6b0-cc0e-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 11:25:12.501: INFO: Waiting up to 5m0s for pod "pod-configmaps-0215c0e9-cc0e-11ea-aa05-0242ac11000b" in namespace "e2e-tests-configmap-p4sqv" to be "success or failure"
Jul 22 11:25:12.551: INFO: Pod "pod-configmaps-0215c0e9-cc0e-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 49.569026ms
Jul 22 11:25:14.826: INFO: Pod "pod-configmaps-0215c0e9-cc0e-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.325259982s
Jul 22 11:25:16.830: INFO: Pod "pod-configmaps-0215c0e9-cc0e-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.329279568s
Jul 22 11:25:18.834: INFO: Pod "pod-configmaps-0215c0e9-cc0e-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.333257201s
STEP: Saw pod success
Jul 22 11:25:18.834: INFO: Pod "pod-configmaps-0215c0e9-cc0e-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:25:18.837: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-0215c0e9-cc0e-11ea-aa05-0242ac11000b container env-test: 
STEP: delete the pod
Jul 22 11:25:18.857: INFO: Waiting for pod pod-configmaps-0215c0e9-cc0e-11ea-aa05-0242ac11000b to disappear
Jul 22 11:25:18.879: INFO: Pod pod-configmaps-0215c0e9-cc0e-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:25:18.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-p4sqv" for this suite.
Jul 22 11:25:24.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:25:24.985: INFO: namespace: e2e-tests-configmap-p4sqv, resource: bindings, ignored listing per whitelist
Jul 22 11:25:25.008: INFO: namespace e2e-tests-configmap-p4sqv deletion completed in 6.125419953s

• [SLOW TEST:12.650 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
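Unlike the earlier per-key env mapping, this case pulls a whole ConfigMap into the container's environment at once. A sketch using envFrom with a prefix; the prefix and names are illustrative assumptions:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// EnvFromConfigMapPod imports every key of a ConfigMap into the container's
// environment via envFrom, with each variable name prefixed by CONFIG_.
func EnvFromConfigMapPod(configMapName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmap-envfrom-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				EnvFrom: []corev1.EnvFromSource{{
					Prefix: "CONFIG_",
					ConfigMapRef: &corev1.ConfigMapEnvSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
					},
				}},
			}},
		},
	}
}
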
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:25:25.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jul 22 11:25:31.480: INFO: 10 pods remaining
Jul 22 11:25:31.480: INFO: 10 pods has nil DeletionTimestamp
Jul 22 11:25:31.480: INFO: 
Jul 22 11:25:33.261: INFO: 7 pods remaining
Jul 22 11:25:33.261: INFO: 0 pods has nil DeletionTimestamp
Jul 22 11:25:33.261: INFO: 
STEP: Gathering metrics
W0722 11:25:34.923743       7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 22 11:25:34.923: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:25:34.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-m8ngt" for this suite.
Jul 22 11:25:41.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:25:41.610: INFO: namespace: e2e-tests-gc-m8ngt, resource: bindings, ignored listing per whitelist
Jul 22 11:25:41.629: INFO: namespace e2e-tests-gc-m8ngt deletion completed in 6.357122584s

• [SLOW TEST:16.620 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
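The garbage-collector test above deletes a ReplicationController with a DeleteOptions propagation policy that keeps the controller object around until its pods are gone. A minimal sketch of issuing that kind of foreground-cascading delete by hand, using an illustrative controller name (my-rc) and namespace (default) rather than anything from this log:

kubectl proxy --port=8001 &
# Foreground propagation: the RC gets a deletionTimestamp and a foregroundDeletion
# finalizer, and is only removed after the garbage collector has deleted its pods.
curl -X DELETE \
  -H "Content-Type: application/json" \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/my-rc

Newer kubectl releases expose the same behaviour directly as kubectl delete rc my-rc --cascade=foreground.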
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:25:41.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating api versions
Jul 22 11:25:41.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jul 22 11:25:41.914: INFO: stderr: ""
Jul 22 11:25:41.914: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:25:41.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-2c5kc" for this suite.
Jul 22 11:25:47.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:25:47.949: INFO: namespace: e2e-tests-kubectl-2c5kc, resource: bindings, ignored listing per whitelist
Jul 22 11:25:48.014: INFO: namespace e2e-tests-kubectl-2c5kc deletion completed in 6.095519645s

• [SLOW TEST:6.385 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
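The kubectl api-versions check above only asserts that the core v1 group/version appears in the server's discovery output. The same assertion can be scripted with a plain grep; the kubeconfig path is the one this log already uses:

# Exits non-zero if the core "v1" API version is not advertised by the server.
kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1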
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:25:48.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 11:25:48.129: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1752be9f-cc0e-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-9vr4m" to be "success or failure"
Jul 22 11:25:48.186: INFO: Pod "downwardapi-volume-1752be9f-cc0e-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 57.427074ms
Jul 22 11:25:50.191: INFO: Pod "downwardapi-volume-1752be9f-cc0e-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061803245s
Jul 22 11:25:52.195: INFO: Pod "downwardapi-volume-1752be9f-cc0e-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065868668s
STEP: Saw pod success
Jul 22 11:25:52.195: INFO: Pod "downwardapi-volume-1752be9f-cc0e-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:25:52.197: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-1752be9f-cc0e-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 11:25:52.261: INFO: Waiting for pod downwardapi-volume-1752be9f-cc0e-11ea-aa05-0242ac11000b to disappear
Jul 22 11:25:52.282: INFO: Pod downwardapi-volume-1752be9f-cc0e-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:25:52.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-9vr4m" for this suite.
Jul 22 11:25:58.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:25:58.377: INFO: namespace: e2e-tests-downward-api-9vr4m, resource: bindings, ignored listing per whitelist
Jul 22 11:25:58.392: INFO: namespace e2e-tests-downward-api-9vr4m deletion completed in 6.105634938s

• [SLOW TEST:10.377 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
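The downward-API volume test above mounts a file whose contents are the container's own memory limit. A minimal sketch of a pod with that shape; the names and the busybox image are illustrative, not taken from the log:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-limits-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi        # file should read 64, the limit expressed in Mi
EOF

Once the pod has succeeded, kubectl logs downwardapi-limits-demo prints the limit it read from the mounted file.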
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:25:58.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-1d876099-cc0e-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 11:25:58.547: INFO: Waiting up to 5m0s for pod "pod-secrets-1d87f3d8-cc0e-11ea-aa05-0242ac11000b" in namespace "e2e-tests-secrets-l82xv" to be "success or failure"
Jul 22 11:25:58.558: INFO: Pod "pod-secrets-1d87f3d8-cc0e-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.230394ms
Jul 22 11:26:00.562: INFO: Pod "pod-secrets-1d87f3d8-cc0e-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015011542s
Jul 22 11:26:02.566: INFO: Pod "pod-secrets-1d87f3d8-cc0e-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019362782s
STEP: Saw pod success
Jul 22 11:26:02.566: INFO: Pod "pod-secrets-1d87f3d8-cc0e-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:26:02.569: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-1d87f3d8-cc0e-11ea-aa05-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Jul 22 11:26:02.601: INFO: Waiting for pod pod-secrets-1d87f3d8-cc0e-11ea-aa05-0242ac11000b to disappear
Jul 22 11:26:02.612: INFO: Pod pod-secrets-1d87f3d8-cc0e-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:26:02.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-l82xv" for this suite.
Jul 22 11:26:08.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:26:08.680: INFO: namespace: e2e-tests-secrets-l82xv, resource: bindings, ignored listing per whitelist
Jul 22 11:26:08.694: INFO: namespace e2e-tests-secrets-l82xv deletion completed in 6.079022128s

• [SLOW TEST:10.302 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
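The Secrets test above mounts a secret as a volume and reads a key back from the filesystem. A minimal sketch with illustrative names (demo-secret, pod-secrets-demo, busybox), none of which come from the log:

kubectl create secret generic demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF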
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:26:08.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 11:26:08.793: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23a43a97-cc0e-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-cd7zc" to be "success or failure"
Jul 22 11:26:08.798: INFO: Pod "downwardapi-volume-23a43a97-cc0e-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181664ms
Jul 22 11:26:10.802: INFO: Pod "downwardapi-volume-23a43a97-cc0e-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008383257s
Jul 22 11:26:12.807: INFO: Pod "downwardapi-volume-23a43a97-cc0e-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.013039875s
Jul 22 11:26:14.811: INFO: Pod "downwardapi-volume-23a43a97-cc0e-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01758339s
STEP: Saw pod success
Jul 22 11:26:14.811: INFO: Pod "downwardapi-volume-23a43a97-cc0e-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:26:14.814: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-23a43a97-cc0e-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 11:26:14.871: INFO: Waiting for pod downwardapi-volume-23a43a97-cc0e-11ea-aa05-0242ac11000b to disappear
Jul 22 11:26:14.882: INFO: Pod downwardapi-volume-23a43a97-cc0e-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:26:14.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-cd7zc" for this suite.
Jul 22 11:26:20.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:26:21.051: INFO: namespace: e2e-tests-projected-cd7zc, resource: bindings, ignored listing per whitelist
Jul 22 11:26:21.153: INFO: namespace e2e-tests-projected-cd7zc deletion completed in 6.267749429s

• [SLOW TEST:12.458 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
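The projected downward-API test above sets an explicit file mode on a single projected item. A minimal sketch of that manifest shape, with illustrative names and image:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: podinfo-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # -L follows the symlink kubelet creates, so the mode of the real file is shown.
    command: ["sh", "-c", "ls -lL /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400          # per-item mode, the field this test exercises
EOF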
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:26:21.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Starting the proxy
Jul 22 11:26:21.328: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix646358245/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:26:21.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-l5zsr" for this suite.
Jul 22 11:26:27.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:26:27.569: INFO: namespace: e2e-tests-kubectl-l5zsr, resource: bindings, ignored listing per whitelist
Jul 22 11:26:27.592: INFO: namespace e2e-tests-kubectl-l5zsr deletion completed in 6.193751278s

• [SLOW TEST:6.439 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
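The proxy test above starts kubectl proxy on a unix socket instead of a TCP port and then fetches /api/ through it. Done by hand it looks roughly like this; the socket path is illustrative:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
# curl can speak HTTP over the socket; the host in the URL is just a placeholder.
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/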
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:26:27.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 22 11:26:32.264: INFO: Successfully updated pod "annotationupdate2eecb769-cc0e-11ea-aa05-0242ac11000b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:26:34.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-j9dmb" for this suite.
Jul 22 11:26:58.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:26:58.389: INFO: namespace: e2e-tests-downward-api-j9dmb, resource: bindings, ignored listing per whitelist
Jul 22 11:26:58.412: INFO: namespace e2e-tests-downward-api-j9dmb deletion completed in 24.128028324s

• [SLOW TEST:30.820 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
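The test above mounts the pod's annotations through the downward API and then updates them, expecting the mounted file to follow. A minimal sketch, with illustrative names and image:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    demo-key: initial-value
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF

kubectl annotate pod annotationupdate-demo demo-key=updated-value --overwrite
# After the kubelet's periodic sync the mounted file reflects the new value.
kubectl exec annotationupdate-demo -- cat /etc/podinfo/annotations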
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:26:58.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-x2dv
STEP: Creating a pod to test atomic-volume-subpath
Jul 22 11:26:58.591: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x2dv" in namespace "e2e-tests-subpath-ww4q4" to be "success or failure"
Jul 22 11:26:58.595: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Pending", Reason="", readiness=false. Elapsed: 3.649514ms
Jul 22 11:27:00.618: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027259192s
Jul 22 11:27:02.622: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03077905s
Jul 22 11:27:04.702: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110931303s
Jul 22 11:27:06.706: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Running", Reason="", readiness=false. Elapsed: 8.1147398s
Jul 22 11:27:08.710: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Running", Reason="", readiness=false. Elapsed: 10.119144847s
Jul 22 11:27:10.714: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Running", Reason="", readiness=false. Elapsed: 12.122983862s
Jul 22 11:27:12.717: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Running", Reason="", readiness=false. Elapsed: 14.126365036s
Jul 22 11:27:14.722: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Running", Reason="", readiness=false. Elapsed: 16.130746634s
Jul 22 11:27:16.726: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Running", Reason="", readiness=false. Elapsed: 18.134781526s
Jul 22 11:27:18.730: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Running", Reason="", readiness=false. Elapsed: 20.138976578s
Jul 22 11:27:20.734: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Running", Reason="", readiness=false. Elapsed: 22.142648883s
Jul 22 11:27:22.738: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Running", Reason="", readiness=false. Elapsed: 24.146943645s
Jul 22 11:27:24.742: INFO: Pod "pod-subpath-test-configmap-x2dv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.151069986s
STEP: Saw pod success
Jul 22 11:27:24.742: INFO: Pod "pod-subpath-test-configmap-x2dv" satisfied condition "success or failure"
Jul 22 11:27:24.744: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-configmap-x2dv container test-container-subpath-configmap-x2dv: 
STEP: delete the pod
Jul 22 11:27:24.790: INFO: Waiting for pod pod-subpath-test-configmap-x2dv to disappear
Jul 22 11:27:24.799: INFO: Pod pod-subpath-test-configmap-x2dv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-x2dv
Jul 22 11:27:24.799: INFO: Deleting pod "pod-subpath-test-configmap-x2dv" in namespace "e2e-tests-subpath-ww4q4"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:27:24.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-ww4q4" for this suite.
Jul 22 11:27:30.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:27:30.903: INFO: namespace: e2e-tests-subpath-ww4q4, resource: bindings, ignored listing per whitelist
Jul 22 11:27:30.910: INFO: namespace e2e-tests-subpath-ww4q4 deletion completed in 6.102403386s

• [SLOW TEST:32.497 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
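The subpath test above mounts a single ConfigMap key over a path where the container image already has a file, rather than shadowing a whole directory. A sketch of that pattern using the nginx image this log already pulls elsewhere; the ConfigMap name and config contents are illustrative:

kubectl create configmap nginx-conf-demo \
  --from-literal=nginx.conf='events {} http { server { listen 8080; } }'

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: conf
      mountPath: /etc/nginx/nginx.conf   # replaces just this existing file
      subPath: nginx.conf                # only this key from the ConfigMap
  volumes:
  - name: conf
    configMap:
      name: nginx-conf-demo
EOF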
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:27:30.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating replication controller my-hostname-basic-54a9ac5a-cc0e-11ea-aa05-0242ac11000b
Jul 22 11:27:31.038: INFO: Pod name my-hostname-basic-54a9ac5a-cc0e-11ea-aa05-0242ac11000b: Found 0 pods out of 1
Jul 22 11:27:36.042: INFO: Pod name my-hostname-basic-54a9ac5a-cc0e-11ea-aa05-0242ac11000b: Found 1 pods out of 1
Jul 22 11:27:36.042: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-54a9ac5a-cc0e-11ea-aa05-0242ac11000b" are running
Jul 22 11:27:36.045: INFO: Pod "my-hostname-basic-54a9ac5a-cc0e-11ea-aa05-0242ac11000b-88dzc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-22 11:27:31 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-22 11:27:34 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-22 11:27:34 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-22 11:27:31 +0000 UTC Reason: Message:}])
Jul 22 11:27:36.045: INFO: Trying to dial the pod
Jul 22 11:27:41.056: INFO: Controller my-hostname-basic-54a9ac5a-cc0e-11ea-aa05-0242ac11000b: Got expected result from replica 1 [my-hostname-basic-54a9ac5a-cc0e-11ea-aa05-0242ac11000b-88dzc]: "my-hostname-basic-54a9ac5a-cc0e-11ea-aa05-0242ac11000b-88dzc", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:27:41.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-89nmp" for this suite.
Jul 22 11:27:47.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:27:47.129: INFO: namespace: e2e-tests-replication-controller-89nmp, resource: bindings, ignored listing per whitelist
Jul 22 11:27:47.151: INFO: namespace e2e-tests-replication-controller-89nmp deletion completed in 6.090760606s

• [SLOW TEST:16.240 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
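The ReplicationController test above creates a one-replica controller from a small public image that answers with its own hostname and then dials the pod. A minimal sketch of the controller shape; the name is illustrative, and nginx (already pulled elsewhere in this log) stands in for the hostname-serving image the conformance test actually uses:

kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-demo
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-demo
  template:
    metadata:
      labels:
        name: my-hostname-basic-demo
    spec:
      containers:
      - name: my-hostname-basic-demo
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 80
EOF

kubectl get pods -l name=my-hostname-basic-demo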
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:27:47.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-upd-5e59147c-cc0e-11ea-aa05-0242ac11000b
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:27:53.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-9vkxl" for this suite.
Jul 22 11:28:15.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:28:15.398: INFO: namespace: e2e-tests-configmap-9vkxl, resource: bindings, ignored listing per whitelist
Jul 22 11:28:15.532: INFO: namespace e2e-tests-configmap-9vkxl deletion completed in 22.195733609s

• [SLOW TEST:28.382 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
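The ConfigMap test above checks that non-UTF-8 data round-trips through a volume mount. Keys created from files whose content is not valid UTF-8 normally land under the ConfigMap's binaryData field; a sketch with illustrative names:

head -c 16 /dev/urandom > payload.bin
kubectl create configmap binary-demo --from-file=payload.bin
kubectl get configmap binary-demo -o yaml   # payload.bin shows up under binaryData, base64-encoded

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: configmap-binary-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "od -c /etc/cm/payload.bin"]   # dumps the raw bytes as mounted
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: binary-demo
EOF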
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:28:15.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-gmxct
Jul 22 11:28:19.646: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-gmxct
STEP: checking the pod's current state and verifying that restartCount is present
Jul 22 11:28:19.650: INFO: Initial restart count of pod liveness-http is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:32:20.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-gmxct" for this suite.
Jul 22 11:32:26.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:32:26.651: INFO: namespace: e2e-tests-container-probe-gmxct, resource: bindings, ignored listing per whitelist
Jul 22 11:32:26.678: INFO: namespace e2e-tests-container-probe-gmxct deletion completed in 6.08236735s

• [SLOW TEST:251.146 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
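The probe test above runs a pod with an HTTP liveness probe and asserts its restart count stays at zero for several minutes. A minimal sketch of a pod with that shape; nginx probed on / stands in for the /healthz-serving image the test uses:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3
EOF

# As long as the probe keeps succeeding this stays at 0.
kubectl get pod liveness-http-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'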
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:32:26.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul 22 11:32:27.449: INFO: Pod name wrapped-volume-race-054f7b4d-cc0f-11ea-aa05-0242ac11000b: Found 0 pods out of 5
Jul 22 11:32:32.457: INFO: Pod name wrapped-volume-race-054f7b4d-cc0f-11ea-aa05-0242ac11000b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-054f7b4d-cc0f-11ea-aa05-0242ac11000b in namespace e2e-tests-emptydir-wrapper-p4rkg, will wait for the garbage collector to delete the pods
Jul 22 11:34:14.746: INFO: Deleting ReplicationController wrapped-volume-race-054f7b4d-cc0f-11ea-aa05-0242ac11000b took: 7.527222ms
Jul 22 11:34:14.946: INFO: Terminating ReplicationController wrapped-volume-race-054f7b4d-cc0f-11ea-aa05-0242ac11000b pods took: 200.277185ms
STEP: Creating RC which spawns configmap-volume pods
Jul 22 11:34:57.920: INFO: Pod name wrapped-volume-race-5efd7dd8-cc0f-11ea-aa05-0242ac11000b: Found 0 pods out of 5
Jul 22 11:35:02.928: INFO: Pod name wrapped-volume-race-5efd7dd8-cc0f-11ea-aa05-0242ac11000b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5efd7dd8-cc0f-11ea-aa05-0242ac11000b in namespace e2e-tests-emptydir-wrapper-p4rkg, will wait for the garbage collector to delete the pods
Jul 22 11:36:55.012: INFO: Deleting ReplicationController wrapped-volume-race-5efd7dd8-cc0f-11ea-aa05-0242ac11000b took: 7.376217ms
Jul 22 11:36:55.112: INFO: Terminating ReplicationController wrapped-volume-race-5efd7dd8-cc0f-11ea-aa05-0242ac11000b pods took: 100.340865ms
STEP: Creating RC which spawns configmap-volume pods
Jul 22 11:37:37.843: INFO: Pod name wrapped-volume-race-be5655b6-cc0f-11ea-aa05-0242ac11000b: Found 0 pods out of 5
Jul 22 11:37:42.851: INFO: Pod name wrapped-volume-race-be5655b6-cc0f-11ea-aa05-0242ac11000b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-be5655b6-cc0f-11ea-aa05-0242ac11000b in namespace e2e-tests-emptydir-wrapper-p4rkg, will wait for the garbage collector to delete the pods
Jul 22 11:40:26.944: INFO: Deleting ReplicationController wrapped-volume-race-be5655b6-cc0f-11ea-aa05-0242ac11000b took: 7.233168ms
Jul 22 11:40:27.044: INFO: Terminating ReplicationController wrapped-volume-race-be5655b6-cc0f-11ea-aa05-0242ac11000b pods took: 100.18526ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:41:10.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-p4rkg" for this suite.
Jul 22 11:41:18.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:41:18.830: INFO: namespace: e2e-tests-emptydir-wrapper-p4rkg, resource: bindings, ignored listing per whitelist
Jul 22 11:41:18.885: INFO: namespace e2e-tests-emptydir-wrapper-p4rkg deletion completed in 8.079381535s

• [SLOW TEST:532.206 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:41:18.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 22 11:41:19.024: INFO: Waiting up to 5m0s for pod "downward-api-422f74b6-cc10-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-tl4tx" to be "success or failure"
Jul 22 11:41:19.038: INFO: Pod "downward-api-422f74b6-cc10-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.530679ms
Jul 22 11:41:21.117: INFO: Pod "downward-api-422f74b6-cc10-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092156471s
Jul 22 11:41:23.273: INFO: Pod "downward-api-422f74b6-cc10-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.248455695s
STEP: Saw pod success
Jul 22 11:41:23.273: INFO: Pod "downward-api-422f74b6-cc10-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:41:23.276: INFO: Trying to get logs from node hunter-worker2 pod downward-api-422f74b6-cc10-11ea-aa05-0242ac11000b container dapi-container: 
STEP: delete the pod
Jul 22 11:41:23.297: INFO: Waiting for pod downward-api-422f74b6-cc10-11ea-aa05-0242ac11000b to disappear
Jul 22 11:41:23.302: INFO: Pod downward-api-422f74b6-cc10-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:41:23.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-tl4tx" for this suite.
Jul 22 11:41:29.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:41:29.410: INFO: namespace: e2e-tests-downward-api-tl4tx, resource: bindings, ignored listing per whitelist
Jul 22 11:41:29.442: INFO: namespace e2e-tests-downward-api-tl4tx deletion completed in 6.136336538s

• [SLOW TEST:10.557 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
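The downward-API test above exposes the pod's name, namespace and IP as environment variables. A minimal sketch, with illustrative names and image (the quoted heredoc keeps the local shell from expanding the $ variables):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_NAME=$POD_NAME POD_NAMESPACE=$POD_NAMESPACE POD_IP=$POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF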
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:41:29.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 22 11:41:30.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-8crd8'
Jul 22 11:41:34.698: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 22 11:41:34.698: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jul 22 11:41:34.711: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jul 22 11:41:34.722: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jul 22 11:41:34.750: INFO: scanned /root for discovery docs: 
Jul 22 11:41:34.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-8crd8'
Jul 22 11:41:50.679: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul 22 11:41:50.679: INFO: stdout: "Created e2e-test-nginx-rc-c32eda13f1288c4bda4bf2ca6f7b1e82\nScaling up e2e-test-nginx-rc-c32eda13f1288c4bda4bf2ca6f7b1e82 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-c32eda13f1288c4bda4bf2ca6f7b1e82 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-c32eda13f1288c4bda4bf2ca6f7b1e82 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jul 22 11:41:50.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-8crd8'
Jul 22 11:41:50.796: INFO: stderr: ""
Jul 22 11:41:50.796: INFO: stdout: "e2e-test-nginx-rc-c32eda13f1288c4bda4bf2ca6f7b1e82-pdd6r "
Jul 22 11:41:50.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c32eda13f1288c4bda4bf2ca6f7b1e82-pdd6r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8crd8'
Jul 22 11:41:50.887: INFO: stderr: ""
Jul 22 11:41:50.887: INFO: stdout: "true"
Jul 22 11:41:50.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-c32eda13f1288c4bda4bf2ca6f7b1e82-pdd6r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8crd8'
Jul 22 11:41:50.984: INFO: stderr: ""
Jul 22 11:41:50.984: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jul 22 11:41:50.984: INFO: e2e-test-nginx-rc-c32eda13f1288c4bda4bf2ca6f7b1e82-pdd6r is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
Jul 22 11:41:50.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-8crd8'
Jul 22 11:41:51.163: INFO: stderr: ""
Jul 22 11:41:51.163: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:41:51.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-8crd8" for this suite.
Jul 22 11:42:13.187: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:42:13.196: INFO: namespace: e2e-tests-kubectl-8crd8, resource: bindings, ignored listing per whitelist
Jul 22 11:42:13.300: INFO: namespace e2e-tests-kubectl-8crd8 deletion completed in 22.132561855s

• [SLOW TEST:43.858 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
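Both commands the test drives above print deprecation warnings in this release: kubectl run --generator=run/v1 and kubectl rolling-update have been superseded by Deployments. A rough modern equivalent, with an illustrative Deployment name; note that, unlike rolling-update, re-applying an unchanged image does not by itself start a new rollout (newer kubectl releases add kubectl rollout restart for that):

kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
kubectl rollout status deployment/e2e-test-nginx
# Changing the image (any container, via the * wildcard) triggers a rolling update.
kubectl set image deployment/e2e-test-nginx '*=docker.io/library/nginx:1.15-alpine'
kubectl rollout status deployment/e2e-test-nginx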
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:42:13.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 22 11:42:13.468: INFO: Waiting up to 5m0s for pod "pod-62a0a349-cc10-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-jls87" to be "success or failure"
Jul 22 11:42:13.484: INFO: Pod "pod-62a0a349-cc10-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.916812ms
Jul 22 11:42:15.488: INFO: Pod "pod-62a0a349-cc10-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019546271s
Jul 22 11:42:17.492: INFO: Pod "pod-62a0a349-cc10-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023531436s
STEP: Saw pod success
Jul 22 11:42:17.492: INFO: Pod "pod-62a0a349-cc10-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:42:17.495: INFO: Trying to get logs from node hunter-worker pod pod-62a0a349-cc10-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 11:42:17.627: INFO: Waiting for pod pod-62a0a349-cc10-11ea-aa05-0242ac11000b to disappear
Jul 22 11:42:17.758: INFO: Pod pod-62a0a349-cc10-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:42:17.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-jls87" for this suite.
Jul 22 11:42:23.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:42:23.857: INFO: namespace: e2e-tests-emptydir-jls87, resource: bindings, ignored listing per whitelist
Jul 22 11:42:23.882: INFO: namespace e2e-tests-emptydir-jls87 deletion completed in 6.118639216s

• [SLOW TEST:10.581 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
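The emptyDir test above checks that a volume on the default medium is created world-writable (0777) and usable by a non-root user. A minimal sketch, with an illustrative UID and image:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root, illustrative UID
  containers:
  - name: test-container
    image: busybox
    # Show the mount's mode and prove the non-root user can write into it.
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # default medium (node disk); medium: Memory would use tmpfs
EOF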
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:42:23.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 11:42:23.993: INFO: Requires at least 2 nodes (not -1)
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
Jul 22 11:42:24.017: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-ldhr6/daemonsets","resourceVersion":"2180691"},"items":null}

Jul 22 11:42:24.019: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-ldhr6/pods","resourceVersion":"2180691"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:42:24.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-ldhr6" for this suite.
Jul 22 11:42:30.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:42:30.060: INFO: namespace: e2e-tests-daemonsets-ldhr6, resource: bindings, ignored listing per whitelist
Jul 22 11:42:30.118: INFO: namespace e2e-tests-daemonsets-ldhr6 deletion completed in 6.0862683s

S [SKIPPING] [6.235 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should rollback without unnecessary restarts [Conformance] [It]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699

  Jul 22 11:42:23.993: Requires at least 2 nodes (not -1)

  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
------------------------------
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:42:30.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating service multi-endpoint-test in namespace e2e-tests-services-4jjrq
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4jjrq to expose endpoints map[]
Jul 22 11:42:30.299: INFO: Get endpoints failed (26.701882ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul 22 11:42:31.302: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4jjrq exposes endpoints map[] (1.029567268s elapsed)
STEP: Creating pod pod1 in namespace e2e-tests-services-4jjrq
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4jjrq to expose endpoints map[pod1:[100]]
Jul 22 11:42:35.527: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4jjrq exposes endpoints map[pod1:[100]] (4.219904172s elapsed)
STEP: Creating pod pod2 in namespace e2e-tests-services-4jjrq
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4jjrq to expose endpoints map[pod1:[100] pod2:[101]]
Jul 22 11:42:38.587: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4jjrq exposes endpoints map[pod1:[100] pod2:[101]] (3.055571122s elapsed)
STEP: Deleting pod pod1 in namespace e2e-tests-services-4jjrq
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4jjrq to expose endpoints map[pod2:[101]]
Jul 22 11:42:39.665: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4jjrq exposes endpoints map[pod2:[101]] (1.073480947s elapsed)
STEP: Deleting pod pod2 in namespace e2e-tests-services-4jjrq
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-4jjrq to expose endpoints map[]
Jul 22 11:42:40.688: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-4jjrq exposes endpoints map[] (1.018877834s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:42:40.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-services-4jjrq" for this suite.
Jul 22 11:43:02.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:43:02.837: INFO: namespace: e2e-tests-services-4jjrq, resource: bindings, ignored listing per whitelist
Jul 22 11:43:02.857: INFO: namespace e2e-tests-services-4jjrq deletion completed in 22.090494551s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90

• [SLOW TEST:32.739 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
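The Services test above creates a multi-port service and watches its Endpoints object track pods as they come and go. A minimal sketch of a two-port service and one backing pod, with illustrative names; both service ports simply target the container's port 80 here:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-demo
spec:
  selector:
    app: multi-endpoint-demo
  ports:
  - name: http-a               # names are required when a service exposes more than one port
    port: 80
    targetPort: 80
  - name: http-b
    port: 81
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-endpoint-pod
  labels:
    app: multi-endpoint-demo
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
EOF

# Once the pod is Ready, both named ports appear against its IP.
kubectl get endpoints multi-endpoint-demo -o yaml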
SSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:43:02.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test use defaults
Jul 22 11:43:02.990: INFO: Waiting up to 5m0s for pod "client-containers-8027dcbf-cc10-11ea-aa05-0242ac11000b" in namespace "e2e-tests-containers-phnkc" to be "success or failure"
Jul 22 11:43:03.008: INFO: Pod "client-containers-8027dcbf-cc10-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.867993ms
Jul 22 11:43:05.040: INFO: Pod "client-containers-8027dcbf-cc10-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049992453s
Jul 22 11:43:07.044: INFO: Pod "client-containers-8027dcbf-cc10-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.053604089s
Jul 22 11:43:09.049: INFO: Pod "client-containers-8027dcbf-cc10-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058488804s
STEP: Saw pod success
Jul 22 11:43:09.049: INFO: Pod "client-containers-8027dcbf-cc10-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:43:09.052: INFO: Trying to get logs from node hunter-worker2 pod client-containers-8027dcbf-cc10-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 11:43:09.089: INFO: Waiting for pod client-containers-8027dcbf-cc10-11ea-aa05-0242ac11000b to disappear
Jul 22 11:43:09.097: INFO: Pod client-containers-8027dcbf-cc10-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:43:09.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-phnkc" for this suite.
Jul 22 11:43:15.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:43:15.140: INFO: namespace: e2e-tests-containers-phnkc, resource: bindings, ignored listing per whitelist
Jul 22 11:43:15.189: INFO: namespace e2e-tests-containers-phnkc deletion completed in 6.088871719s

• [SLOW TEST:12.332 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:43:15.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 11:43:15.318: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jul 22 11:43:20.324: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 22 11:43:20.324: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jul 22 11:43:22.328: INFO: Creating deployment "test-rollover-deployment"
Jul 22 11:43:22.335: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jul 22 11:43:24.342: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jul 22 11:43:24.348: INFO: Ensure that both replica sets have 1 created replica
Jul 22 11:43:24.354: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jul 22 11:43:24.360: INFO: Updating deployment test-rollover-deployment
Jul 22 11:43:24.360: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jul 22 11:43:26.396: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jul 22 11:43:26.403: INFO: Make sure deployment "test-rollover-deployment" is complete
Jul 22 11:43:26.409: INFO: all replica sets need to contain the pod-template-hash label
Jul 22 11:43:26.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015004, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 22 11:43:28.419: INFO: all replica sets need to contain the pod-template-hash label
Jul 22 11:43:28.419: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015008, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 22 11:43:30.418: INFO: all replica sets need to contain the pod-template-hash label
Jul 22 11:43:30.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015008, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 22 11:43:32.418: INFO: all replica sets need to contain the pod-template-hash label
Jul 22 11:43:32.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015008, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 22 11:43:34.416: INFO: all replica sets need to contain the pod-template-hash label
Jul 22 11:43:34.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015008, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 22 11:43:36.418: INFO: all replica sets need to contain the pod-template-hash label
Jul 22 11:43:36.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015008, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015002, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5b8479fdb6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 22 11:43:38.418: INFO: 
Jul 22 11:43:38.418: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul 22 11:43:38.427: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-ndzqm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ndzqm/deployments/test-rollover-deployment,UID:8baf7339-cc10-11ea-b2c9-0242ac120008,ResourceVersion:2180996,Generation:2,CreationTimestamp:2020-07-22 11:43:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-22 11:43:22 +0000 UTC 2020-07-22 11:43:22 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-22 11:43:38 +0000 UTC 2020-07-22 11:43:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-5b8479fdb6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jul 22 11:43:38.429: INFO: New ReplicaSet "test-rollover-deployment-5b8479fdb6" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6,GenerateName:,Namespace:e2e-tests-deployment-ndzqm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ndzqm/replicasets/test-rollover-deployment-5b8479fdb6,UID:8ce59690-cc10-11ea-b2c9-0242ac120008,ResourceVersion:2180987,Generation:2,CreationTimestamp:2020-07-22 11:43:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8baf7339-cc10-11ea-b2c9-0242ac120008 0xc000b074d7 0xc000b074d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul 22 11:43:38.429: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jul 22 11:43:38.430: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-ndzqm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ndzqm/replicasets/test-rollover-controller,UID:877ccb32-cc10-11ea-b2c9-0242ac120008,ResourceVersion:2180995,Generation:2,CreationTimestamp:2020-07-22 11:43:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8baf7339-cc10-11ea-b2c9-0242ac120008 0xc000b07307 0xc000b07308}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 22 11:43:38.430: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-58494b7559,GenerateName:,Namespace:e2e-tests-deployment-ndzqm,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-ndzqm/replicasets/test-rollover-deployment-58494b7559,UID:8bb1ed88-cc10-11ea-b2c9-0242ac120008,ResourceVersion:2180951,Generation:2,CreationTimestamp:2020-07-22 11:43:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 8baf7339-cc10-11ea-b2c9-0242ac120008 0xc000b073c7 0xc000b073c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 58494b7559,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 22 11:43:38.432: INFO: Pod "test-rollover-deployment-5b8479fdb6-6cb8m" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-5b8479fdb6-6cb8m,GenerateName:test-rollover-deployment-5b8479fdb6-,Namespace:e2e-tests-deployment-ndzqm,SelfLink:/api/v1/namespaces/e2e-tests-deployment-ndzqm/pods/test-rollover-deployment-5b8479fdb6-6cb8m,UID:8cf8cbb0-cc10-11ea-b2c9-0242ac120008,ResourceVersion:2180965,Generation:0,CreationTimestamp:2020-07-22 11:43:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 5b8479fdb6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-5b8479fdb6 8ce59690-cc10-11ea-b2c9-0242ac120008 0xc000938007 0xc000938008}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-sgz6s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sgz6s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-sgz6s true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000938460} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000938480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:43:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:43:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:43:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:43:24 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.50,StartTime:2020-07-22 11:43:24 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-22 11:43:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://63437c8383b5cbe0403182155ece313764e7bb258d6110d9e41efd47ff6d399a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:43:38.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-ndzqm" for this suite.
Jul 22 11:43:44.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:43:44.731: INFO: namespace: e2e-tests-deployment-ndzqm, resource: bindings, ignored listing per whitelist
Jul 22 11:43:44.757: INFO: namespace e2e-tests-deployment-ndzqm deletion completed in 6.321720568s

• [SLOW TEST:29.568 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
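The Deployment dumped above drives the rollover with maxUnavailable=0, maxSurge=1 and minReadySeconds=10: the new ReplicaSet surges one pod, that pod must stay ready for 10 seconds, and only then are the old ReplicaSets scaled to zero, which is why the status loop keeps reporting UpdatedReplicas:1 with one unavailable replica for several polls. A Go sketch of that shape, with the values taken from the dump; anything not in the dump is illustrative.

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxUnavailable := intstr.FromInt(0)
	maxSurge := intstr.FromInt(1)
	labels := map[string]string{"name": "rollover-pod"}

	// Rollover behaviour: surge one new pod, keep it ready for MinReadySeconds,
	// then retire the old ReplicaSets.
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        &replicas,
			MinReadySeconds: 10,
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	fmt.Printf("%s surge=%s unavailable=%s\n", d.Name, maxSurge.String(), maxUnavailable.String())
}

------------------------------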
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:43:44.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 22 11:43:44.879: INFO: Waiting up to 5m0s for pod "pod-991d1250-cc10-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-t9nhb" to be "success or failure"
Jul 22 11:43:44.894: INFO: Pod "pod-991d1250-cc10-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.490728ms
Jul 22 11:43:46.897: INFO: Pod "pod-991d1250-cc10-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017320084s
Jul 22 11:43:48.901: INFO: Pod "pod-991d1250-cc10-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021655862s
STEP: Saw pod success
Jul 22 11:43:48.901: INFO: Pod "pod-991d1250-cc10-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:43:48.904: INFO: Trying to get logs from node hunter-worker pod pod-991d1250-cc10-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 11:43:48.950: INFO: Waiting for pod pod-991d1250-cc10-11ea-aa05-0242ac11000b to disappear
Jul 22 11:43:49.046: INFO: Pod pod-991d1250-cc10-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:43:49.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-t9nhb" for this suite.
Jul 22 11:43:55.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:43:55.082: INFO: namespace: e2e-tests-emptydir-t9nhb, resource: bindings, ignored listing per whitelist
Jul 22 11:43:55.142: INFO: namespace e2e-tests-emptydir-t9nhb deletion completed in 6.091477379s

• [SLOW TEST:10.385 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:43:55.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 22 11:43:55.298: INFO: Waiting up to 5m0s for pod "pod-9f54a6eb-cc10-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-4ftwl" to be "success or failure"
Jul 22 11:43:55.302: INFO: Pod "pod-9f54a6eb-cc10-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.650942ms
Jul 22 11:43:57.370: INFO: Pod "pod-9f54a6eb-cc10-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071700864s
Jul 22 11:43:59.374: INFO: Pod "pod-9f54a6eb-cc10-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076546377s
STEP: Saw pod success
Jul 22 11:43:59.375: INFO: Pod "pod-9f54a6eb-cc10-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:43:59.378: INFO: Trying to get logs from node hunter-worker2 pod pod-9f54a6eb-cc10-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 11:43:59.565: INFO: Waiting for pod pod-9f54a6eb-cc10-11ea-aa05-0242ac11000b to disappear
Jul 22 11:43:59.569: INFO: Pod pod-9f54a6eb-cc10-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:43:59.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-4ftwl" for this suite.
Jul 22 11:44:05.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:44:05.635: INFO: namespace: e2e-tests-emptydir-4ftwl, resource: bindings, ignored listing per whitelist
Jul 22 11:44:05.664: INFO: namespace e2e-tests-emptydir-4ftwl deletion completed in 6.091600542s

• [SLOW TEST:10.522 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
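Both emptyDir entries above create a single-volume pod that writes a 0644 file as a non-root user and checks the resulting permissions; the first uses the node's default medium, the second tmpfs (medium: Memory). A combined Go sketch under those assumptions; the UID, image and command are illustrative, and the real tests use the e2e mounttest image rather than a shell.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1001) // illustrative non-root UID

	// One volume on the default medium, one backed by tmpfs; both mounted by a
	// non-root container that creates a file and lists its permissions.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{
				{Name: "default-medium", VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{}}},
				{Name: "tmpfs-medium", VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "touch /ephemeral/default/f /ephemeral/tmpfs/f && ls -l /ephemeral/*/f"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "default-medium", MountPath: "/ephemeral/default"},
					{Name: "tmpfs-medium", MountPath: "/ephemeral/tmpfs"},
				},
			}},
		},
	}
	fmt.Printf("%d volumes, runAsUser=%d\n", len(pod.Spec.Volumes), *pod.Spec.SecurityContext.RunAsUser)
}

------------------------------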
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:44:05.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating pod
Jul 22 11:44:09.841: INFO: Pod pod-hostip-a5930bff-cc10-11ea-aa05-0242ac11000b has hostIP: 172.18.0.2
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:44:09.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-46d7j" for this suite.
Jul 22 11:44:31.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:44:31.887: INFO: namespace: e2e-tests-pods-46d7j, resource: bindings, ignored listing per whitelist
Jul 22 11:44:31.932: INFO: namespace e2e-tests-pods-46d7j deletion completed in 22.087175215s

• [SLOW TEST:26.268 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
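The host-IP test only has to observe that status.hostIP is populated with the node's address (172.18.0.2 in the log above). A sketch of that check, assuming the v1.13-era client-go call signatures used by this suite (no context argument); the namespace and pod name are illustrative, while the kubeconfig path matches the one in the log.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Fetch the pod and read status.hostIP; the test passes once it is set.
	pod, err := client.CoreV1().Pods("default").Get("pod-hostip-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("hostIP:", pod.Status.HostIP)
}

------------------------------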
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:44:31.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 22 11:44:40.182: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:44:40.187: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:44:42.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:44:42.192: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:44:44.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:44:44.192: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:44:46.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:44:46.203: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:44:48.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:44:48.192: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:44:50.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:44:50.192: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:44:52.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:44:52.203: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:44:54.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:44:54.198: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:44:56.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:44:56.192: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:44:58.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:44:58.203: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:45:00.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:45:00.192: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:45:02.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:45:02.192: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:45:04.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:45:04.191: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:45:06.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:45:06.191: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 22 11:45:08.188: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 22 11:45:08.205: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:45:08.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-t2src" for this suite.
Jul 22 11:45:30.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:45:30.311: INFO: namespace: e2e-tests-container-lifecycle-hook-t2src, resource: bindings, ignored listing per whitelist
Jul 22 11:45:30.364: INFO: namespace e2e-tests-container-lifecycle-hook-t2src deletion completed in 22.154310228s

• [SLOW TEST:58.431 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
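The lifecycle-hook test creates a pod whose container declares a postStart exec hook and verifies the hook ran before deleting the pod; the long "waiting to disappear" tail above is just that deletion. A Go sketch of such a container, assuming the v1.13-era API where the hook handler type is corev1.Handler (renamed LifecycleHandler in later releases). The image and hook commands are illustrative, and the real test has the hook call back to a separate handler pod. The PreStop entry that follows exercises the symmetric preStop field shown in the same sketch.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A long-running container with both lifecycle hooks attached.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "sleep 600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs right after the container starts.
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "echo poststart ran"}},
					},
					// Runs right before the container is terminated.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "echo prestop ran"}},
					},
				},
			}},
		},
	}
	fmt.Printf("postStart: %v\n", pod.Spec.Containers[0].Lifecycle.PostStart.Exec.Command)
}

------------------------------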
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:45:30.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating server pod server in namespace e2e-tests-prestop-tdq8n
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace e2e-tests-prestop-tdq8n
STEP: Deleting pre-stop pod
Jul 22 11:45:43.534: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:45:43.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-prestop-tdq8n" for this suite.
Jul 22 11:46:21.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:46:21.607: INFO: namespace: e2e-tests-prestop-tdq8n, resource: bindings, ignored listing per whitelist
Jul 22 11:46:21.668: INFO: namespace e2e-tests-prestop-tdq8n deletion completed in 38.119289525s

• [SLOW TEST:51.304 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:46:21.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
STEP: Creating a pod to test consume service account token
Jul 22 11:46:22.261: INFO: Waiting up to 5m0s for pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-bv2k5" in namespace "e2e-tests-svcaccounts-4lzfj" to be "success or failure"
Jul 22 11:46:22.267: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-bv2k5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276437ms
Jul 22 11:46:24.270: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-bv2k5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009904236s
Jul 22 11:46:26.274: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-bv2k5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013196607s
Jul 22 11:46:28.278: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-bv2k5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01742854s
Jul 22 11:46:30.283: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-bv2k5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.022029109s
STEP: Saw pod success
Jul 22 11:46:30.283: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-bv2k5" satisfied condition "success or failure"
Jul 22 11:46:30.285: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-bv2k5 container token-test: 
STEP: delete the pod
Jul 22 11:46:30.333: INFO: Waiting for pod pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-bv2k5 to disappear
Jul 22 11:46:30.345: INFO: Pod pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-bv2k5 no longer exists
STEP: Creating a pod to test consume service account root CA
Jul 22 11:46:30.349: INFO: Waiting up to 5m0s for pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-9tqcv" in namespace "e2e-tests-svcaccounts-4lzfj" to be "success or failure"
Jul 22 11:46:30.363: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-9tqcv": Phase="Pending", Reason="", readiness=false. Elapsed: 13.837934ms
Jul 22 11:46:32.372: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-9tqcv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023148898s
Jul 22 11:46:34.376: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-9tqcv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026639776s
Jul 22 11:46:36.380: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-9tqcv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030712111s
Jul 22 11:46:38.384: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-9tqcv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035059837s
STEP: Saw pod success
Jul 22 11:46:38.384: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-9tqcv" satisfied condition "success or failure"
Jul 22 11:46:38.387: INFO: Trying to get logs from node hunter-worker2 pod pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-9tqcv container root-ca-test: 
STEP: delete the pod
Jul 22 11:46:38.452: INFO: Waiting for pod pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-9tqcv to disappear
Jul 22 11:46:38.467: INFO: Pod pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-9tqcv no longer exists
STEP: Creating a pod to test consume service account namespace
Jul 22 11:46:38.471: INFO: Waiting up to 5m0s for pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-mmjjj" in namespace "e2e-tests-svcaccounts-4lzfj" to be "success or failure"
Jul 22 11:46:38.486: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-mmjjj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.552375ms
Jul 22 11:46:40.490: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-mmjjj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018356474s
Jul 22 11:46:42.756: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-mmjjj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.284445486s
Jul 22 11:46:44.759: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-mmjjj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.28746788s
STEP: Saw pod success
Jul 22 11:46:44.759: INFO: Pod "pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-mmjjj" satisfied condition "success or failure"
Jul 22 11:46:44.761: INFO: Trying to get logs from node hunter-worker pod pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-mmjjj container namespace-test: 
STEP: delete the pod
Jul 22 11:46:44.844: INFO: Waiting for pod pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-mmjjj to disappear
Jul 22 11:46:44.860: INFO: Pod pod-service-account-f6edff3b-cc10-11ea-aa05-0242ac11000b-mmjjj no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:46:44.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-4lzfj" for this suite.
Jul 22 11:46:50.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:46:50.974: INFO: namespace: e2e-tests-svcaccounts-4lzfj, resource: bindings, ignored listing per whitelist
Jul 22 11:46:51.000: INFO: namespace e2e-tests-svcaccounts-4lzfj deletion completed in 6.135438091s

• [SLOW TEST:29.332 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
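The service-account test runs three short pods (token-test, root-ca-test and namespace-test above) that each read one of the files the kubelet mounts from the auto-created token secret at /var/run/secrets/kubernetes.io/serviceaccount. A Go sketch that folds the three reads into one illustrative container; the image and command are assumptions.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// With token auto-mounting left on, the container can simply read the
	// token, CA bundle and namespace from the well-known mount path.
	const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:      corev1.RestartPolicyNever,
			ServiceAccountName: "default",
			Containers: []corev1.Container{{
				Name:    "token-test",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", fmt.Sprintf("cat %s/token %s/ca.crt %s/namespace", saDir, saDir, saDir)},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Command[2])
}

------------------------------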
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:46:51.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name projected-secret-test-08202335-cc11-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 11:46:51.139: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-08245544-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-4z7rg" to be "success or failure"
Jul 22 11:46:51.142: INFO: Pod "pod-projected-secrets-08245544-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.257483ms
Jul 22 11:46:53.145: INFO: Pod "pod-projected-secrets-08245544-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006687725s
Jul 22 11:46:55.149: INFO: Pod "pod-projected-secrets-08245544-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010279171s
STEP: Saw pod success
Jul 22 11:46:55.149: INFO: Pod "pod-projected-secrets-08245544-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:46:55.152: INFO: Trying to get logs from node hunter-worker pod pod-projected-secrets-08245544-cc11-11ea-aa05-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Jul 22 11:46:55.240: INFO: Waiting for pod pod-projected-secrets-08245544-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:46:55.259: INFO: Pod pod-projected-secrets-08245544-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:46:55.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-4z7rg" for this suite.
Jul 22 11:47:01.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:47:01.315: INFO: namespace: e2e-tests-projected-4z7rg, resource: bindings, ignored listing per whitelist
Jul 22 11:47:01.404: INFO: namespace e2e-tests-projected-4z7rg deletion completed in 6.139708881s

• [SLOW TEST:10.405 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
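The projected-secret test mounts the same secret through two separate projected volumes and has the container verify both copies. A Go sketch under that assumption; the secret name is the one created in the log, while the mount paths, image and command are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secretName := "projected-secret-test-08202335-cc11-11ea-aa05-0242ac11000b"

	// Helper that projects the same secret into a differently named volume.
	newProjected := func(volName string) corev1.Volume {
		return corev1.Volume{
			Name: volName,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						},
					}},
				},
			},
		}
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{newProjected("secret-volume-1"), newProjected("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "cat /etc/secret-1/* /etc/secret-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-2", ReadOnly: true},
				},
			}},
		},
	}
	fmt.Printf("%d projected volumes\n", len(pod.Spec.Volumes))
}

------------------------------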
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:47:01.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:48:01.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-grj89" for this suite.
Jul 22 11:48:23.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:48:23.557: INFO: namespace: e2e-tests-container-probe-grj89, resource: bindings, ignored listing per whitelist
Jul 22 11:48:23.612: INFO: namespace e2e-tests-container-probe-grj89 deletion completed in 22.086816493s

• [SLOW TEST:82.207 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
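The readiness-probe test creates a pod whose probe always fails and then watches it for roughly a minute (11:47:01 to 11:48:01 above): a failing readiness probe keeps the pod out of the Ready condition and out of Service endpoints, but unlike a liveness probe it never restarts the container. A Go sketch of such a pod, assuming the v1.13-era API where Probe embeds corev1.Handler; the image, command and probe timings are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The exec readiness probe runs /bin/false, so every probe attempt fails
	// and the pod is never marked Ready; the container keeps running with a
	// restart count of zero.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-probe-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "test-webserver",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "sleep 600"},
				ReadinessProbe: &corev1.Probe{
					Handler:             corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	fmt.Printf("readiness probe: %v\n", pod.Spec.Containers[0].ReadinessProbe.Handler.Exec.Command)
}

------------------------------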
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:48:23.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-3f7757bb-cc11-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 11:48:24.383: INFO: Waiting up to 5m0s for pod "pod-secrets-3fb1b3f5-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-secrets-xd6gv" to be "success or failure"
Jul 22 11:48:24.613: INFO: Pod "pod-secrets-3fb1b3f5-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 230.200246ms
Jul 22 11:48:26.617: INFO: Pod "pod-secrets-3fb1b3f5-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234091807s
Jul 22 11:48:28.621: INFO: Pod "pod-secrets-3fb1b3f5-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237809177s
Jul 22 11:48:30.625: INFO: Pod "pod-secrets-3fb1b3f5-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.241482193s
STEP: Saw pod success
Jul 22 11:48:30.625: INFO: Pod "pod-secrets-3fb1b3f5-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:48:30.627: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-3fb1b3f5-cc11-11ea-aa05-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Jul 22 11:48:30.717: INFO: Waiting for pod pod-secrets-3fb1b3f5-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:48:30.725: INFO: Pod pod-secrets-3fb1b3f5-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:48:30.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-xd6gv" for this suite.
Jul 22 11:48:36.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:48:36.840: INFO: namespace: e2e-tests-secrets-xd6gv, resource: bindings, ignored listing per whitelist
Jul 22 11:48:36.917: INFO: namespace e2e-tests-secrets-xd6gv deletion completed in 6.188238717s

• [SLOW TEST:13.305 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
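The secret-volume test sets an explicit defaultMode on the volume plus a non-root runAsUser/fsGroup on the pod, so the projected files come out group-owned by the fsGroup with the requested mode and remain readable to the non-root user. A Go sketch under those assumptions; the secret name comes from the log, while the mode, IDs, image and command are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0440)    // illustrative defaultMode
	uid := int64(1001)     // illustrative non-root UID
	fsGroup := int64(1001) // illustrative fsGroup

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-3f7757bb-cc11-11ea-aa05-0242ac11000b", // from the log
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox", // assumed image
				Command:      []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
			}},
		},
	}
	fmt.Printf("defaultMode=%#o fsGroup=%d\n", *pod.Spec.Volumes[0].Secret.DefaultMode, *pod.Spec.SecurityContext.FSGroup)
}

------------------------------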
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:48:36.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 22 11:48:41.836: INFO: Successfully updated pod "labelsupdate4752ccfb-cc11-11ea-aa05-0242ac11000b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:48:43.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-rl7jx" for this suite.
Jul 22 11:49:05.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:49:05.992: INFO: namespace: e2e-tests-downward-api-rl7jx, resource: bindings, ignored listing per whitelist
Jul 22 11:49:06.005: INFO: namespace e2e-tests-downward-api-rl7jx deletion completed in 22.136452128s

• [SLOW TEST:29.088 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
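The downward-API test exposes the pod's labels as a file through a downwardAPI volume, patches the labels after the pod starts, and waits for the kubelet to rewrite the file, which is the "Successfully updated pod" step above. A Go sketch of such a volume; the paths, image, command and label values are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// metadata.labels is projected to /etc/podinfo/labels; when the labels are
	// updated on the live pod, the kubelet refreshes the file in place.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-example",
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox", // assumed image
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].DownwardAPI.Items[0].FieldRef.FieldPath)
}

------------------------------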
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:49:06.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 11:49:06.173: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5897e80b-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-dhnz6" to be "success or failure"
Jul 22 11:49:06.204: INFO: Pod "downwardapi-volume-5897e80b-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.610672ms
Jul 22 11:49:08.279: INFO: Pod "downwardapi-volume-5897e80b-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105858972s
Jul 22 11:49:10.284: INFO: Pod "downwardapi-volume-5897e80b-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110713941s
STEP: Saw pod success
Jul 22 11:49:10.284: INFO: Pod "downwardapi-volume-5897e80b-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:49:10.287: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-5897e80b-cc11-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 11:49:10.321: INFO: Waiting for pod downwardapi-volume-5897e80b-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:49:10.324: INFO: Pod downwardapi-volume-5897e80b-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:49:10.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-dhnz6" for this suite.
Jul 22 11:49:16.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:49:16.410: INFO: namespace: e2e-tests-projected-dhnz6, resource: bindings, ignored listing per whitelist
Jul 22 11:49:16.495: INFO: namespace e2e-tests-projected-dhnz6 deletion completed in 6.167821846s

• [SLOW TEST:10.489 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
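The projected downwardAPI variant used above exposes only the pod name, through a projected volume source rather than a plain downwardAPI volume. A minimal sketch of such a pod follows; names, image, and paths are illustrative.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF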
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:49:16.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-5ed4e95c-cc11-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 11:49:16.619: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5edaf9f5-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-fsdfg" to be "success or failure"
Jul 22 11:49:16.627: INFO: Pod "pod-projected-configmaps-5edaf9f5-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.631979ms
Jul 22 11:49:18.631: INFO: Pod "pod-projected-configmaps-5edaf9f5-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011773154s
Jul 22 11:49:20.635: INFO: Pod "pod-projected-configmaps-5edaf9f5-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015378531s
STEP: Saw pod success
Jul 22 11:49:20.635: INFO: Pod "pod-projected-configmaps-5edaf9f5-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:49:20.637: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-5edaf9f5-cc11-11ea-aa05-0242ac11000b container projected-configmap-volume-test: 
STEP: delete the pod
Jul 22 11:49:20.659: INFO: Waiting for pod pod-projected-configmaps-5edaf9f5-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:49:20.679: INFO: Pod pod-projected-configmaps-5edaf9f5-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:49:20.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-fsdfg" for this suite.
Jul 22 11:49:26.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:49:26.774: INFO: namespace: e2e-tests-projected-fsdfg, resource: bindings, ignored listing per whitelist
Jul 22 11:49:26.774: INFO: namespace e2e-tests-projected-fsdfg deletion completed in 6.090197557s

• [SLOW TEST:10.279 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
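"Mappings" here means the items list of a projected configMap source, which remaps a key to a chosen file path inside the mount. A hand-rolled equivalent of the pod above might look like this; the ConfigMap name, key, and paths are illustrative.

kubectl create configmap projected-configmap-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
          items:
          - key: data-1
            path: path/to/data-1
EOF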
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:49:26.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 22 11:49:26.909: INFO: Waiting up to 5m0s for pod "pod-64fd1d27-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-ppcv8" to be "success or failure"
Jul 22 11:49:26.939: INFO: Pod "pod-64fd1d27-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.813469ms
Jul 22 11:49:28.943: INFO: Pod "pod-64fd1d27-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033245044s
Jul 22 11:49:30.950: INFO: Pod "pod-64fd1d27-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040219067s
STEP: Saw pod success
Jul 22 11:49:30.950: INFO: Pod "pod-64fd1d27-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:49:30.952: INFO: Trying to get logs from node hunter-worker2 pod pod-64fd1d27-cc11-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 11:49:31.084: INFO: Waiting for pod pod-64fd1d27-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:49:31.091: INFO: Pod pod-64fd1d27-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:49:31.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ppcv8" for this suite.
Jul 22 11:49:37.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:49:37.185: INFO: namespace: e2e-tests-emptydir-ppcv8, resource: bindings, ignored listing per whitelist
Jul 22 11:49:37.194: INFO: namespace e2e-tests-emptydir-ppcv8 deletion completed in 6.10026668s

• [SLOW TEST:10.420 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
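The (non-root,0777,tmpfs) case boils down to a memory-backed emptyDir written from a non-root security context. The sketch below is only an approximation: the real spec uses a dedicated mount-test image to assert the 0777 mode, and the user ID, image, and mount path here are assumptions.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "id && ls -ld /mnt/test && echo hello > /mnt/test/out && cat /mnt/test/out"]
    volumeMounts:
    - name: tmpfs-volume
      mountPath: /mnt/test
  volumes:
  - name: tmpfs-volume
    emptyDir:
      medium: Memory   # the Memory medium backs the emptyDir with tmpfs
EOF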
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:49:37.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:49:41.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-wrapper-9f8sp" for this suite.
Jul 22 11:49:47.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:49:48.056: INFO: namespace: e2e-tests-emptydir-wrapper-9f8sp, resource: bindings, ignored listing per whitelist
Jul 22 11:49:48.061: INFO: namespace e2e-tests-emptydir-wrapper-9f8sp deletion completed in 6.115330409s

• [SLOW TEST:10.866 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
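Wrapper volumes are sources such as secret and configMap that the kubelet materializes on top of an emptyDir; the spec checks that two of them mounted in one pod do not collide. A simplified pod of that shape, with all names illustrative:

kubectl create secret generic wrapped-secret --from-literal=data-1=value-1
kubectl create configmap wrapped-configmap --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-wrapper-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume && sleep 3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapped-secret
  - name: configmap-volume
    configMap:
      name: wrapped-configmap
EOF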
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:49:48.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jul 22 11:49:48.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:49:48.476: INFO: stderr: ""
Jul 22 11:49:48.476: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 22 11:49:48.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:49:48.603: INFO: stderr: ""
Jul 22 11:49:48.603: INFO: stdout: "update-demo-nautilus-smthf update-demo-nautilus-tjndj "
Jul 22 11:49:48.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smthf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:49:48.711: INFO: stderr: ""
Jul 22 11:49:48.711: INFO: stdout: ""
Jul 22 11:49:48.711: INFO: update-demo-nautilus-smthf is created but not running
Jul 22 11:49:53.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:49:53.815: INFO: stderr: ""
Jul 22 11:49:53.815: INFO: stdout: "update-demo-nautilus-smthf update-demo-nautilus-tjndj "
Jul 22 11:49:53.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smthf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:49:53.913: INFO: stderr: ""
Jul 22 11:49:53.913: INFO: stdout: "true"
Jul 22 11:49:53.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-smthf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:49:54.007: INFO: stderr: ""
Jul 22 11:49:54.007: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 22 11:49:54.007: INFO: validating pod update-demo-nautilus-smthf
Jul 22 11:49:54.011: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 22 11:49:54.011: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 22 11:49:54.011: INFO: update-demo-nautilus-smthf is verified up and running
Jul 22 11:49:54.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tjndj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:49:54.111: INFO: stderr: ""
Jul 22 11:49:54.111: INFO: stdout: "true"
Jul 22 11:49:54.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tjndj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:49:54.225: INFO: stderr: ""
Jul 22 11:49:54.225: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 22 11:49:54.225: INFO: validating pod update-demo-nautilus-tjndj
Jul 22 11:49:54.229: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 22 11:49:54.229: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 22 11:49:54.229: INFO: update-demo-nautilus-tjndj is verified up and running
STEP: scaling down the replication controller
Jul 22 11:49:54.231: INFO: scanned /root for discovery docs: 
Jul 22 11:49:54.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:49:55.377: INFO: stderr: ""
Jul 22 11:49:55.377: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 22 11:49:55.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:49:55.533: INFO: stderr: ""
Jul 22 11:49:55.533: INFO: stdout: "update-demo-nautilus-smthf update-demo-nautilus-tjndj "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul 22 11:50:00.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:00.645: INFO: stderr: ""
Jul 22 11:50:00.645: INFO: stdout: "update-demo-nautilus-smthf update-demo-nautilus-tjndj "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul 22 11:50:05.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:05.762: INFO: stderr: ""
Jul 22 11:50:05.762: INFO: stdout: "update-demo-nautilus-smthf update-demo-nautilus-tjndj "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul 22 11:50:10.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:10.865: INFO: stderr: ""
Jul 22 11:50:10.866: INFO: stdout: "update-demo-nautilus-tjndj "
Jul 22 11:50:10.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tjndj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:10.956: INFO: stderr: ""
Jul 22 11:50:10.956: INFO: stdout: "true"
Jul 22 11:50:10.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tjndj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:11.051: INFO: stderr: ""
Jul 22 11:50:11.051: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 22 11:50:11.051: INFO: validating pod update-demo-nautilus-tjndj
Jul 22 11:50:11.055: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 22 11:50:11.055: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 22 11:50:11.055: INFO: update-demo-nautilus-tjndj is verified up and running
STEP: scaling up the replication controller
Jul 22 11:50:11.057: INFO: scanned /root for discovery docs: 
Jul 22 11:50:11.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:12.190: INFO: stderr: ""
Jul 22 11:50:12.190: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 22 11:50:12.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:12.293: INFO: stderr: ""
Jul 22 11:50:12.293: INFO: stdout: "update-demo-nautilus-6qvh8 update-demo-nautilus-tjndj "
Jul 22 11:50:12.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6qvh8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:12.386: INFO: stderr: ""
Jul 22 11:50:12.386: INFO: stdout: ""
Jul 22 11:50:12.386: INFO: update-demo-nautilus-6qvh8 is created but not running
Jul 22 11:50:17.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:17.493: INFO: stderr: ""
Jul 22 11:50:17.493: INFO: stdout: "update-demo-nautilus-6qvh8 update-demo-nautilus-tjndj "
Jul 22 11:50:17.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6qvh8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:17.584: INFO: stderr: ""
Jul 22 11:50:17.584: INFO: stdout: "true"
Jul 22 11:50:17.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6qvh8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:17.674: INFO: stderr: ""
Jul 22 11:50:17.674: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 22 11:50:17.674: INFO: validating pod update-demo-nautilus-6qvh8
Jul 22 11:50:17.678: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 22 11:50:17.678: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 22 11:50:17.678: INFO: update-demo-nautilus-6qvh8 is verified up and running
Jul 22 11:50:17.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tjndj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:17.790: INFO: stderr: ""
Jul 22 11:50:17.790: INFO: stdout: "true"
Jul 22 11:50:17.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tjndj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:17.889: INFO: stderr: ""
Jul 22 11:50:17.889: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 22 11:50:17.889: INFO: validating pod update-demo-nautilus-tjndj
Jul 22 11:50:17.893: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 22 11:50:17.893: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul 22 11:50:17.893: INFO: update-demo-nautilus-tjndj is verified up and running
STEP: using delete to clean up resources
Jul 22 11:50:17.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:18.004: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 22 11:50:18.004: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 22 11:50:18.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-mpzdz'
Jul 22 11:50:18.198: INFO: stderr: "No resources found.\n"
Jul 22 11:50:18.198: INFO: stdout: ""
Jul 22 11:50:18.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-mpzdz -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 22 11:50:18.480: INFO: stderr: ""
Jul 22 11:50:18.480: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:50:18.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-mpzdz" for this suite.
Jul 22 11:50:40.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:50:40.601: INFO: namespace: e2e-tests-kubectl-mpzdz, resource: bindings, ignored listing per whitelist
Jul 22 11:50:40.638: INFO: namespace e2e-tests-kubectl-mpzdz deletion completed in 22.153458195s

• [SLOW TEST:52.577 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
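Stripped of the retry loops, the scale-down/scale-up sequence driven above reduces to a handful of kubectl calls. The rc name and label come from the run; the namespace is whatever your current context uses, so reproducing this outside the suite needs your own namespace.

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
kubectl get pods -l name=update-demo -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m
# Cleanup, mirroring the forced delete at the end of the spec:
kubectl delete rc update-demo-nautilus --grace-period=0 --force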
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:50:40.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-9102164b-cc11-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 11:50:40.775: INFO: Waiting up to 5m0s for pod "pod-secrets-9103d702-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-secrets-smg5q" to be "success or failure"
Jul 22 11:50:40.779: INFO: Pod "pod-secrets-9103d702-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098826ms
Jul 22 11:50:42.783: INFO: Pod "pod-secrets-9103d702-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007696716s
Jul 22 11:50:44.787: INFO: Pod "pod-secrets-9103d702-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011826473s
STEP: Saw pod success
Jul 22 11:50:44.787: INFO: Pod "pod-secrets-9103d702-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:50:44.790: INFO: Trying to get logs from node hunter-worker pod pod-secrets-9103d702-cc11-11ea-aa05-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Jul 22 11:50:44.850: INFO: Waiting for pod pod-secrets-9103d702-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:50:44.875: INFO: Pod pod-secrets-9103d702-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:50:44.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-smg5q" for this suite.
Jul 22 11:50:50.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:50:50.916: INFO: namespace: e2e-tests-secrets-smg5q, resource: bindings, ignored listing per whitelist
Jul 22 11:50:50.970: INFO: namespace e2e-tests-secrets-smg5q deletion completed in 6.086961902s

• [SLOW TEST:10.331 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
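defaultMode on a secret volume sets the permission bits applied to every projected key. A minimal hand-rolled version of the pod above; the secret name, key, and mode value are illustrative.

kubectl create secret generic secret-test-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-demo
      defaultMode: 0400   # applied to each file projected from the secret
EOF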
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:50:50.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 11:50:51.303: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9748339d-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-2crtl" to be "success or failure"
Jul 22 11:50:51.457: INFO: Pod "downwardapi-volume-9748339d-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 153.618261ms
Jul 22 11:50:53.461: INFO: Pod "downwardapi-volume-9748339d-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15756913s
Jul 22 11:50:55.465: INFO: Pod "downwardapi-volume-9748339d-cc11-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.161985909s
Jul 22 11:50:57.472: INFO: Pod "downwardapi-volume-9748339d-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.168500597s
STEP: Saw pod success
Jul 22 11:50:57.472: INFO: Pod "downwardapi-volume-9748339d-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:50:57.474: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-9748339d-cc11-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 11:50:57.562: INFO: Waiting for pod downwardapi-volume-9748339d-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:50:57.571: INFO: Pod downwardapi-volume-9748339d-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:50:57.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-2crtl" for this suite.
Jul 22 11:51:03.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:51:03.694: INFO: namespace: e2e-tests-projected-2crtl, resource: bindings, ignored listing per whitelist
Jul 22 11:51:03.739: INFO: namespace e2e-tests-projected-2crtl deletion completed in 6.121778066s

• [SLOW TEST:12.770 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
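Exposing a container's CPU limit through the downward API needs a resourceFieldRef plus a divisor. The sketch below assumes a 500m limit and reports it in millicores; all names and values are illustrative.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m   # file will contain the limit expressed in millicores
EOF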
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:51:03.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-9ec189cc-cc11-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 11:51:03.831: INFO: Waiting up to 5m0s for pod "pod-configmaps-9ec20f8f-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-configmap-l8r48" to be "success or failure"
Jul 22 11:51:03.836: INFO: Pod "pod-configmaps-9ec20f8f-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.609125ms
Jul 22 11:51:05.915: INFO: Pod "pod-configmaps-9ec20f8f-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08386492s
Jul 22 11:51:07.920: INFO: Pod "pod-configmaps-9ec20f8f-cc11-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.088482957s
Jul 22 11:51:09.924: INFO: Pod "pod-configmaps-9ec20f8f-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.092543664s
STEP: Saw pod success
Jul 22 11:51:09.924: INFO: Pod "pod-configmaps-9ec20f8f-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:51:09.927: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-9ec20f8f-cc11-11ea-aa05-0242ac11000b container configmap-volume-test: 
STEP: delete the pod
Jul 22 11:51:09.983: INFO: Waiting for pod pod-configmaps-9ec20f8f-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:51:09.990: INFO: Pod pod-configmaps-9ec20f8f-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:51:09.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-l8r48" for this suite.
Jul 22 11:51:16.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:51:16.081: INFO: namespace: e2e-tests-configmap-l8r48, resource: bindings, ignored listing per whitelist
Jul 22 11:51:16.135: INFO: namespace e2e-tests-configmap-l8r48 deletion completed in 6.1262238s

• [SLOW TEST:12.396 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
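"Item mode" is the per-key mode field on a configMap volume item, layered on top of the key-to-path mapping. A rough equivalent of the pod above; the ConfigMap name, key, path, and mode are illustrative.

kubectl create configmap configmap-test-volume-map-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -lR /etc/configmap-volume && cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-demo
      items:
      - key: data-1
        path: path/to/data-1
        mode: 0400   # per-item mode overrides any volume-level defaultMode
EOF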
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:51:16.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override command
Jul 22 11:51:16.337: INFO: Waiting up to 5m0s for pod "client-containers-a62c9e62-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-containers-nkhc7" to be "success or failure"
Jul 22 11:51:16.369: INFO: Pod "client-containers-a62c9e62-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.500474ms
Jul 22 11:51:19.095: INFO: Pod "client-containers-a62c9e62-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.757584026s
Jul 22 11:51:21.099: INFO: Pod "client-containers-a62c9e62-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.761796145s
Jul 22 11:51:23.204: INFO: Pod "client-containers-a62c9e62-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.867009829s
Jul 22 11:51:25.329: INFO: Pod "client-containers-a62c9e62-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.991600361s
Jul 22 11:51:27.333: INFO: Pod "client-containers-a62c9e62-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.995297604s
Jul 22 11:51:29.337: INFO: Pod "client-containers-a62c9e62-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.000027755s
STEP: Saw pod success
Jul 22 11:51:29.338: INFO: Pod "client-containers-a62c9e62-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:51:29.341: INFO: Trying to get logs from node hunter-worker pod client-containers-a62c9e62-cc11-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 11:51:29.408: INFO: Waiting for pod client-containers-a62c9e62-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:51:29.417: INFO: Pod client-containers-a62c9e62-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:51:29.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-nkhc7" for this suite.
Jul 22 11:51:35.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:51:35.450: INFO: namespace: e2e-tests-containers-nkhc7, resource: bindings, ignored listing per whitelist
Jul 22 11:51:35.533: INFO: namespace e2e-tests-containers-nkhc7 deletion completed in 6.113386851s

• [SLOW TEST:19.398 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
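In pod terms, overriding the image's default command (its ENTRYPOINT) means setting the container's command field; args would override the image's CMD instead. A minimal sketch, with the image and echoed text as illustrative choices:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # "command" replaces the image ENTRYPOINT; "args" would replace its CMD.
    command: ["/bin/echo", "overridden", "entrypoint"]
EOF
kubectl logs client-containers-demo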
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:51:35.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 22 11:51:35.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-7spms'
Jul 22 11:51:38.316: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 22 11:51:38.316: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
Jul 22 11:51:40.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-7spms'
Jul 22 11:51:40.457: INFO: stderr: ""
Jul 22 11:51:40.457: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:51:40.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7spms" for this suite.
Jul 22 11:51:46.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:51:46.700: INFO: namespace: e2e-tests-kubectl-7spms, resource: bindings, ignored listing per whitelist
Jul 22 11:51:46.718: INFO: namespace e2e-tests-kubectl-7spms deletion completed in 6.176904093s

• [SLOW TEST:11.184 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
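On this 1.13-era cluster the default kubectl run generator still produces a Deployment, which is what the deprecation warning captured above refers to. A manual equivalent, with the image taken from the run and the label selector assumed to follow the generator's usual run=<name> convention:

kubectl run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl get deployment,pods -l run=e2e-test-nginx-deployment
kubectl delete deployment e2e-test-nginx-deployment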
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:51:46.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with secret that has name projected-secret-test-b8654896-cc11-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 11:51:46.892: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b868b592-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-vl2gx" to be "success or failure"
Jul 22 11:51:46.895: INFO: Pod "pod-projected-secrets-b868b592-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.772091ms
Jul 22 11:51:48.899: INFO: Pod "pod-projected-secrets-b868b592-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007129956s
Jul 22 11:51:50.903: INFO: Pod "pod-projected-secrets-b868b592-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010614219s
STEP: Saw pod success
Jul 22 11:51:50.903: INFO: Pod "pod-projected-secrets-b868b592-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:51:50.906: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-secrets-b868b592-cc11-11ea-aa05-0242ac11000b container projected-secret-volume-test: 
STEP: delete the pod
Jul 22 11:51:50.934: INFO: Waiting for pod pod-projected-secrets-b868b592-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:51:50.956: INFO: Pod pod-projected-secrets-b868b592-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:51:50.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vl2gx" for this suite.
Jul 22 11:51:57.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:51:57.026: INFO: namespace: e2e-tests-projected-vl2gx, resource: bindings, ignored listing per whitelist
Jul 22 11:51:57.084: INFO: namespace e2e-tests-projected-vl2gx deletion completed in 6.124328438s

• [SLOW TEST:10.366 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
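For projected volumes the defaultMode sits on the projected block itself rather than on the individual secret source. A minimal sketch of the pod above; the secret name and mode are illustrative.

kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400   # set once on the projected volume, applied to all sources
      sources:
      - secret:
          name: projected-secret-demo
EOF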
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:51:57.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-be942cc6-cc11-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 11:51:57.246: INFO: Waiting up to 5m0s for pod "pod-secrets-be95a56d-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-secrets-8sxd4" to be "success or failure"
Jul 22 11:51:57.250: INFO: Pod "pod-secrets-be95a56d-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.889184ms
Jul 22 11:51:59.299: INFO: Pod "pod-secrets-be95a56d-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052974798s
Jul 22 11:52:01.303: INFO: Pod "pod-secrets-be95a56d-cc11-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.057000006s
Jul 22 11:52:03.307: INFO: Pod "pod-secrets-be95a56d-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060678208s
STEP: Saw pod success
Jul 22 11:52:03.307: INFO: Pod "pod-secrets-be95a56d-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:52:03.309: INFO: Trying to get logs from node hunter-worker pod pod-secrets-be95a56d-cc11-11ea-aa05-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Jul 22 11:52:03.376: INFO: Waiting for pod pod-secrets-be95a56d-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:52:03.466: INFO: Pod pod-secrets-be95a56d-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:52:03.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-8sxd4" for this suite.
Jul 22 11:52:11.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:52:11.548: INFO: namespace: e2e-tests-secrets-8sxd4, resource: bindings, ignored listing per whitelist
Jul 22 11:52:11.602: INFO: namespace e2e-tests-secrets-8sxd4 deletion completed in 8.130500713s

• [SLOW TEST:14.518 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
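This is the secret counterpart of the configMap mapping case further up: items remap a key to a path and the per-item mode sets that file's permissions. A hand-written equivalent; names, key, path, and mode are illustrative.

kubectl create secret generic secret-test-map-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-demo
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400
EOF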
[sig-storage] Downward API volume 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:52:11.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 11:52:11.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c739b4b3-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-7pc7f" to be "success or failure"
Jul 22 11:52:11.821: INFO: Pod "downwardapi-volume-c739b4b3-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.944598ms
Jul 22 11:52:13.964: INFO: Pod "downwardapi-volume-c739b4b3-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177598953s
Jul 22 11:52:16.012: INFO: Pod "downwardapi-volume-c739b4b3-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.225223061s
Jul 22 11:52:18.156: INFO: Pod "downwardapi-volume-c739b4b3-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.368975127s
STEP: Saw pod success
Jul 22 11:52:18.156: INFO: Pod "downwardapi-volume-c739b4b3-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:52:18.158: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-c739b4b3-cc11-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 11:52:18.180: INFO: Waiting for pod downwardapi-volume-c739b4b3-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:52:18.206: INFO: Pod downwardapi-volume-c739b4b3-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:52:18.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-7pc7f" for this suite.
Jul 22 11:52:24.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:52:24.246: INFO: namespace: e2e-tests-downward-api-7pc7f, resource: bindings, ignored listing per whitelist
Jul 22 11:52:24.306: INFO: namespace e2e-tests-downward-api-7pc7f deletion completed in 6.096424544s

• [SLOW TEST:12.703 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
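Here defaultMode lives directly on the downwardAPI volume source and applies to each projected file, such as the podname item below. Names and mode are illustrative.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF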
SSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:52:24.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 11:52:28.527: INFO: Waiting up to 5m0s for pod "client-envvars-d13735a6-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-pods-5qc7n" to be "success or failure"
Jul 22 11:52:28.530: INFO: Pod "client-envvars-d13735a6-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.543192ms
Jul 22 11:52:30.533: INFO: Pod "client-envvars-d13735a6-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00575586s
Jul 22 11:52:32.537: INFO: Pod "client-envvars-d13735a6-cc11-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.009840198s
Jul 22 11:52:34.541: INFO: Pod "client-envvars-d13735a6-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014079936s
STEP: Saw pod success
Jul 22 11:52:34.541: INFO: Pod "client-envvars-d13735a6-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:52:34.545: INFO: Trying to get logs from node hunter-worker2 pod client-envvars-d13735a6-cc11-11ea-aa05-0242ac11000b container env3cont: 
STEP: delete the pod
Jul 22 11:52:34.568: INFO: Waiting for pod client-envvars-d13735a6-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:52:34.586: INFO: Pod client-envvars-d13735a6-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:52:34.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-5qc7n" for this suite.
Jul 22 11:53:18.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:53:18.688: INFO: namespace: e2e-tests-pods-5qc7n, resource: bindings, ignored listing per whitelist
Jul 22 11:53:18.699: INFO: namespace e2e-tests-pods-5qc7n deletion completed in 44.109261083s

• [SLOW TEST:54.393 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
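Service environment variables are injected by the kubelet only into containers created after the Service exists in the same namespace, which is why the spec above creates the backing service first and the client pod afterwards. A rough reproduction; the service name, selector, and ports are illustrative, and the expected variables follow the upper-cased, underscore-separated naming convention.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: fooservice
spec:
  selector:
    name: serve-hostname
  ports:
  - port: 8765
    targetPort: 8080
EOF
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-demo
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox
    command: ["sh", "-c", "env | grep FOOSERVICE"]
EOF
kubectl logs client-envvars-demo   # expect FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT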
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:53:18.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 11:53:18.805: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef34742d-cc11-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-r4rzl" to be "success or failure"
Jul 22 11:53:18.809: INFO: Pod "downwardapi-volume-ef34742d-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.527297ms
Jul 22 11:53:20.813: INFO: Pod "downwardapi-volume-ef34742d-cc11-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007798364s
Jul 22 11:53:22.816: INFO: Pod "downwardapi-volume-ef34742d-cc11-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011298603s
STEP: Saw pod success
Jul 22 11:53:22.816: INFO: Pod "downwardapi-volume-ef34742d-cc11-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:53:22.819: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-ef34742d-cc11-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 11:53:22.859: INFO: Waiting for pod downwardapi-volume-ef34742d-cc11-11ea-aa05-0242ac11000b to disappear
Jul 22 11:53:22.904: INFO: Pod downwardapi-volume-ef34742d-cc11-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:53:22.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-r4rzl" for this suite.
Jul 22 11:53:28.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:53:29.304: INFO: namespace: e2e-tests-downward-api-r4rzl, resource: bindings, ignored listing per whitelist
Jul 22 11:53:29.362: INFO: namespace e2e-tests-downward-api-r4rzl deletion completed in 6.453060522s

• [SLOW TEST:10.662 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
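The pod created above mounts a downwardAPI volume and the test reads the projected file back through the container log. A rough sketch of that pod shape using the Kubernetes API types; the pod name, image, and mount path here are illustrative placeholders, not the suite's exact values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// Print the projected pod name so it shows up in the container log.
				Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					// downwardAPI volume exposing metadata.name as a file named "podname".
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
	fmt.Println("would create pod:", pod.Name)
}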
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:53:29.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul 22 11:53:29.499: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 22 11:53:29.522: INFO: Waiting for terminating namespaces to be deleted...
Jul 22 11:53:29.525: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Jul 22 11:53:29.530: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container status recorded)
Jul 22 11:53:29.530: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 22 11:53:29.530: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container status recorded)
Jul 22 11:53:29.530: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 22 11:53:29.530: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Jul 22 11:53:29.534: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container status recorded)
Jul 22 11:53:29.534: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 22 11:53:29.534: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container status recorded)
Jul 22 11:53:29.534: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f8034c21-cc11-11ea-aa05-0242ac11000b 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-f8034c21-cc11-11ea-aa05-0242ac11000b off the node hunter-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f8034c21-cc11-11ea-aa05-0242ac11000b
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:53:39.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-96xnn" for this suite.
Jul 22 11:53:59.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:54:00.046: INFO: namespace: e2e-tests-sched-pred-96xnn, resource: bindings, ignored listing per whitelist
Jul 22 11:54:00.067: INFO: namespace e2e-tests-sched-pred-96xnn deletion completed in 20.089585057s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:30.705 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
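What the steps above boil down to: the test schedules a throwaway pod to discover a usable node (hunter-worker2 in this run), labels that node with a random key, then relaunches the pod with a matching nodeSelector and expects it to land on the same node. A minimal sketch of such a pod, reusing the label key and value from the log; the pod name and image are placeholders:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			// The scheduler will only place this pod on nodes carrying the label below.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-f8034c21-cc11-11ea-aa05-0242ac11000b": "42",
			},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
			}},
		},
	}
	fmt.Printf("pod %q requires node labels %v\n", pod.Name, pod.Spec.NodeSelector)
}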
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:54:00.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wjwpx A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wjwpx;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wjwpx A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wjwpx.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-wjwpx.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wjwpx.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wjwpx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-wjwpx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wjwpx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-wjwpx.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wjwpx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 227.109.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.109.227_udp@PTR;check="$$(dig +tcp +noall +answer +search 227.109.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.109.227_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wjwpx A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wjwpx;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wjwpx A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-wjwpx.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-wjwpx.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-wjwpx.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wjwpx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-wjwpx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-wjwpx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-wjwpx.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-wjwpx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 227.109.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.109.227_udp@PTR;check="$$(dig +tcp +noall +answer +search 227.109.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.109.227_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 22 11:54:08.381: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:08.389: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:08.396: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:08.415: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:08.417: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:08.419: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:08.423: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:08.426: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:08.429: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:08.431: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:08.435: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:08.452: INFO: Lookups using e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wjwpx jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx jessie_udp@dns-test-service.e2e-tests-dns-wjwpx.svc jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc]

Jul 22 11:54:13.456: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:13.466: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:13.472: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:13.490: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:13.493: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:13.495: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:13.498: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:13.500: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:13.502: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:13.505: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:13.508: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:13.526: INFO: Lookups using e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wjwpx jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx jessie_udp@dns-test-service.e2e-tests-dns-wjwpx.svc jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc]

Jul 22 11:54:18.456: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:18.465: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:18.475: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:18.497: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:18.500: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:18.502: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:18.504: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:18.507: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:18.509: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:18.512: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:18.515: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:18.530: INFO: Lookups using e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wjwpx jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx jessie_udp@dns-test-service.e2e-tests-dns-wjwpx.svc jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc]

Jul 22 11:54:23.456: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:23.465: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:23.475: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:23.499: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:23.502: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:23.505: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:23.509: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:23.512: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:23.515: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:23.518: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:23.522: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:23.541: INFO: Lookups using e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wjwpx jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx jessie_udp@dns-test-service.e2e-tests-dns-wjwpx.svc jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc]

Jul 22 11:54:28.457: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:28.468: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:28.477: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:28.498: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:28.501: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:28.503: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:28.506: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:28.510: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:28.513: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:28.516: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:28.519: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:28.537: INFO: Lookups using e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wjwpx jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx jessie_udp@dns-test-service.e2e-tests-dns-wjwpx.svc jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc]

Jul 22 11:54:33.456: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:33.467: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:33.475: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:33.497: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:33.499: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:33.502: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:33.505: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:33.508: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:33.511: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:33.513: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:33.516: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc from pod e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b: the server could not find the requested resource (get pods dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b)
Jul 22 11:54:33.539: INFO: Lookups using e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-wjwpx wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-wjwpx jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx jessie_udp@dns-test-service.e2e-tests-dns-wjwpx.svc jessie_tcp@dns-test-service.e2e-tests-dns-wjwpx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-wjwpx.svc]

Jul 22 11:54:38.536: INFO: DNS probes using e2e-tests-dns-wjwpx/dns-test-07eccf45-cc12-11ea-aa05-0242ac11000b succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:54:39.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-dns-wjwpx" for this suite.
Jul 22 11:54:45.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:54:45.571: INFO: namespace: e2e-tests-dns-wjwpx, resource: bindings, ignored listing per whitelist
Jul 22 11:54:45.583: INFO: namespace e2e-tests-dns-wjwpx deletion completed in 6.173590096s

• [SLOW TEST:45.516 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
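The repeated "could not find the requested resource" lines above are the test re-reading the /results files before the corresponding lookups have completed; by 11:54:38 all probes succeed. The dig loops exercise A records for the test service at each search-path depth, SRV records for the named _http._tcp port, and a PTR record for a service cluster IP (10.107.109.227 in this run). A sketch of equivalent in-cluster lookups from Go, assuming the default "cluster.local" domain (the namespace, service name, and IP are taken from this run; everything else is illustrative):

package main

import (
	"fmt"
	"net"
)

func main() {
	// A records for the service's fully qualified name.
	if addrs, err := net.LookupHost("dns-test-service.e2e-tests-dns-wjwpx.svc.cluster.local"); err == nil {
		fmt.Println("A records:", addrs)
	}
	// SRV records for the "http" port over TCP (_http._tcp.<service>.<namespace>.svc...).
	if cname, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.e2e-tests-dns-wjwpx.svc.cluster.local"); err == nil {
		fmt.Println("SRV:", cname, "targets:", len(srvs))
	}
	// Reverse (PTR) lookup of the cluster IP probed above.
	if names, err := net.LookupAddr("10.107.109.227"); err == nil {
		fmt.Println("PTR:", names)
	}
}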
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:54:45.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 11:54:45.721: INFO: Creating deployment "nginx-deployment"
Jul 22 11:54:45.727: INFO: Waiting for observed generation 1
Jul 22 11:54:47.754: INFO: Waiting for all required pods to come up
Jul 22 11:54:47.759: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul 22 11:54:57.769: INFO: Waiting for deployment "nginx-deployment" to complete
Jul 22 11:54:57.774: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jul 22 11:54:57.778: INFO: Updating deployment nginx-deployment
Jul 22 11:54:57.778: INFO: Waiting for observed generation 2
Jul 22 11:55:00.506: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul 22 11:55:00.509: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul 22 11:55:00.814: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul 22 11:55:01.579: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul 22 11:55:01.579: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul 22 11:55:01.621: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul 22 11:55:01.625: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jul 22 11:55:01.625: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jul 22 11:55:01.629: INFO: Updating deployment nginx-deployment
Jul 22 11:55:01.629: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jul 22 11:55:01.778: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul 22 11:55:01.819: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
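The split verified above (20 and 13) is the proportional-scaling arithmetic: before the scale-up the deployment was saturated at 10 desired + 3 maxSurge = 13 pods across the two ReplicaSets (8 on the first rollout's, 5 on the second's); scaling to 30 raises the saturated size to 33, and each ReplicaSet keeps roughly its share of it. A simplified sketch of that arithmetic mirroring this run's numbers (the authoritative logic lives in the deployment controller; this is only an illustration):

package main

import (
	"fmt"
	"math"
)

// proportionalSize is a simplified model: each ReplicaSet is resized to its
// current share of the new saturated size (desired replicas + maxSurge),
// rounded to the nearest integer. Not the controller's exact code.
func proportionalSize(rsReplicas, oldSaturated, newDesired, maxSurge int32) int32 {
	newSaturated := newDesired + maxSurge
	return int32(math.Round(float64(rsReplicas) * float64(newSaturated) / float64(oldSaturated)))
}

func main() {
	oldSaturated := int32(10 + 3)                         // previous desired (10) + maxSurge (3)
	fmt.Println(proportionalSize(8, oldSaturated, 30, 3)) // first rollout's RS:  8 -> 20
	fmt.Println(proportionalSize(5, oldSaturated, 30, 3)) // second rollout's RS: 5 -> 13
}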
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul 22 11:55:02.077: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbvsb/deployments/nginx-deployment,UID:230504df-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183430,Generation:3,CreationTimestamp:2020-07-22 11:54:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2020-07-22 11:54:58 +0000 UTC 2020-07-22 11:54:45 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5c98f8fb5" is progressing.} {Available False 2020-07-22 11:55:01 +0000 UTC 2020-07-22 11:55:01 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jul 22 11:55:02.170: INFO: New ReplicaSet "nginx-deployment-5c98f8fb5" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5,GenerateName:,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbvsb/replicasets/nginx-deployment-5c98f8fb5,UID:2a34b772-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183479,Generation:3,CreationTimestamp:2020-07-22 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 230504df-cc12-11ea-b2c9-0242ac120008 0xc00241d777 0xc00241d778}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 22 11:55:02.170: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jul 22 11:55:02.170: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d,GenerateName:,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-hbvsb/replicasets/nginx-deployment-85ddf47c5d,UID:2309ade1-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183480,Generation:3,CreationTimestamp:2020-07-22 11:54:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 230504df-cc12-11ea-b2c9-0242ac120008 0xc00241d837 0xc00241d838}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jul 22 11:55:02.236: INFO: Pod "nginx-deployment-5c98f8fb5-2zjll" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-2zjll,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-5c98f8fb5-2zjll,UID:2a3850e2-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183478,Generation:0,CreationTimestamp:2020-07-22 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2a34b772-cc12-11ea-b2c9-0242ac120008 0xc002aa2227 0xc002aa2228}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa22b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa22d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:57 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.74,StartTime:2020-07-22 11:54:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.237: INFO: Pod "nginx-deployment-5c98f8fb5-5tjp4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-5tjp4,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-5c98f8fb5-5tjp4,UID:2cbd8d03-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183483,Generation:0,CreationTimestamp:2020-07-22 11:55:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2a34b772-cc12-11ea-b2c9-0242ac120008 0xc002aa23e0 0xc002aa23e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa2530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa2550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.237: INFO: Pod "nginx-deployment-5c98f8fb5-678sm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-678sm,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-5c98f8fb5-678sm,UID:2c97370c-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183439,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2a34b772-cc12-11ea-b2c9-0242ac120008 0xc002aa25c7 0xc002aa25c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa2660} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa2690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:01 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.237: INFO: Pod "nginx-deployment-5c98f8fb5-bgx5c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-bgx5c,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-5c98f8fb5-bgx5c,UID:2cb84bad-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183464,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2a34b772-cc12-11ea-b2c9-0242ac120008 0xc002aa2717 0xc002aa2718}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa2810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa2830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.237: INFO: Pod "nginx-deployment-5c98f8fb5-grlh9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-grlh9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-5c98f8fb5-grlh9,UID:2a4bb422-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183408,Generation:0,CreationTimestamp:2020-07-22 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2a34b772-cc12-11ea-b2c9-0242ac120008 0xc002aa28b7 0xc002aa28b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa2930} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa2950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:57 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-22 11:54:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.238: INFO: Pod "nginx-deployment-5c98f8fb5-grzj8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-grzj8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-5c98f8fb5-grzj8,UID:2a3ae780-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183398,Generation:0,CreationTimestamp:2020-07-22 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2a34b772-cc12-11ea-b2c9-0242ac120008 0xc002aa2c40 0xc002aa2c41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa2cc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa2ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:57 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-22 11:54:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.238: INFO: Pod "nginx-deployment-5c98f8fb5-k72jx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-k72jx,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-5c98f8fb5-k72jx,UID:2c9d6302-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183442,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2a34b772-cc12-11ea-b2c9-0242ac120008 0xc002aa2e10 0xc002aa2e11}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa2e90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa2eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:01 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.238: INFO: Pod "nginx-deployment-5c98f8fb5-ktvf9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-ktvf9,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-5c98f8fb5-ktvf9,UID:2cb8508d-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183466,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2a34b772-cc12-11ea-b2c9-0242ac120008 0xc002aa2f27 0xc002aa2f28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa2fa0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa2fc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.238: INFO: Pod "nginx-deployment-5c98f8fb5-lwrxw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-lwrxw,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-5c98f8fb5-lwrxw,UID:2cb83ebb-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183472,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2a34b772-cc12-11ea-b2c9-0242ac120008 0xc002aa3037 0xc002aa3038}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa30b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa30d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.238: INFO: Pod "nginx-deployment-5c98f8fb5-mjkcs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-mjkcs,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-5c98f8fb5-mjkcs,UID:2c9d51f1-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183447,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2a34b772-cc12-11ea-b2c9-0242ac120008 0xc002aa3147 0xc002aa3148}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa31c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa31e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:01 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.239: INFO: Pod "nginx-deployment-5c98f8fb5-rxhn8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-rxhn8,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-5c98f8fb5-rxhn8,UID:2a507a5b-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183410,Generation:0,CreationTimestamp:2020-07-22 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2a34b772-cc12-11ea-b2c9-0242ac120008 0xc002aa3257 0xc002aa3258}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa32d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa32f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:58 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-22 11:54:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.239: INFO: Pod "nginx-deployment-5c98f8fb5-wfkpg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-wfkpg,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-5c98f8fb5-wfkpg,UID:2a3adc95-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183399,Generation:0,CreationTimestamp:2020-07-22 11:54:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2a34b772-cc12-11ea-b2c9-0242ac120008 0xc002aa33b0 0xc002aa33b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa3430} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa3450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:57 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:57 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2020-07-22 11:54:57 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.239: INFO: Pod "nginx-deployment-5c98f8fb5-xvlq6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5c98f8fb5-xvlq6,GenerateName:nginx-deployment-5c98f8fb5-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-5c98f8fb5-xvlq6,UID:2cb81749-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183459,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5c98f8fb5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5c98f8fb5 2a34b772-cc12-11ea-b2c9-0242ac120008 0xc002aa3510 0xc002aa3511}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa3590} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa35b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.239: INFO: Pod "nginx-deployment-85ddf47c5d-5m544" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-5m544,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-5m544,UID:2317c7fa-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183300,Generation:0,CreationTimestamp:2020-07-22 11:54:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002aa3627 0xc002aa3628}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa36a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa36c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:45 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.69,StartTime:2020-07-22 11:54:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-22 11:54:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9b3c2413c7d36c3b9c58cacc80d1f4210d09abf071bd48ef91d9bcab1db7f644}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.240: INFO: Pod "nginx-deployment-85ddf47c5d-6t82b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-6t82b,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-6t82b,UID:2cb83e3d-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183462,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002aa3797 0xc002aa3798}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa3810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa3830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.241: INFO: Pod "nginx-deployment-85ddf47c5d-7rlsf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-7rlsf,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-7rlsf,UID:23185506-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183313,Generation:0,CreationTimestamp:2020-07-22 11:54:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002aa38a7 0xc002aa38a8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa3920} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa3940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:53 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:53 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:45 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.117,StartTime:2020-07-22 11:54:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-22 11:54:52 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://831797779f54822aba7a224825017028057458902295334462d25f89d9e4a154}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.241: INFO: Pod "nginx-deployment-85ddf47c5d-9dslc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-9dslc,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-9dslc,UID:231a2183-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183339,Generation:0,CreationTimestamp:2020-07-22 11:54:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002aa3a07 0xc002aa3a08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa3a80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa3aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:45 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.119,StartTime:2020-07-22 11:54:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-22 11:54:55 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0f28f97e2355f4d9a4e4befcde59b1e9922722c7d140f4ca47f30e5fa82c9511}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.241: INFO: Pod "nginx-deployment-85ddf47c5d-bt2gn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bt2gn,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-bt2gn,UID:2c96a389-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183487,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002aa3b67 0xc002aa3b68}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa3be0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa3c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2020-07-22 11:55:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.242: INFO: Pod "nginx-deployment-85ddf47c5d-bwrxj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-bwrxj,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-bwrxj,UID:2cbd54c9-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183475,Generation:0,CreationTimestamp:2020-07-22 11:55:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002aa3cb7 0xc002aa3cb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa3d30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa3d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.242: INFO: Pod "nginx-deployment-85ddf47c5d-cdrjk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-cdrjk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-cdrjk,UID:2c9d528c-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183444,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002aa3dc7 0xc002aa3dc8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa3e40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa3e60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:01 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.242: INFO: Pod "nginx-deployment-85ddf47c5d-h25fl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-h25fl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-h25fl,UID:2cbd5482-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183474,Generation:0,CreationTimestamp:2020-07-22 11:55:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002aa3ed7 0xc002aa3ed8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002aa3f50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002aa3f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.243: INFO: Pod "nginx-deployment-85ddf47c5d-jktbl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-jktbl,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-jktbl,UID:2cb84bf5-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183468,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002aa3fe7 0xc002aa3fe8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b08070} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b08090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.243: INFO: Pod "nginx-deployment-85ddf47c5d-mcv2c" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mcv2c,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-mcv2c,UID:231a21d2-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183351,Generation:0,CreationTimestamp:2020-07-22 11:54:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002b08177 0xc002b08178}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b08280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b082a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:45 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.72,StartTime:2020-07-22 11:54:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-22 11:54:56 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://df173b316ae8f38dedec358b8188e895fdcf0f842063a6d9f8e85e87bf57e72a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.244: INFO: Pod "nginx-deployment-85ddf47c5d-mjjnh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mjjnh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-mjjnh,UID:2cb84a9a-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183471,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002b08367 0xc002b08368}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b08530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b08550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.244: INFO: Pod "nginx-deployment-85ddf47c5d-mwmdh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-mwmdh,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-mwmdh,UID:2cbd998b-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183482,Generation:0,CreationTimestamp:2020-07-22 11:55:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002b085c7 0xc002b085c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b08640} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b08660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.244: INFO: Pod "nginx-deployment-85ddf47c5d-nmld9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-nmld9,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-nmld9,UID:231a22b3-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183332,Generation:0,CreationTimestamp:2020-07-22 11:54:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002b08ce7 0xc002b08ce8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b090e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b09100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:45 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.71,StartTime:2020-07-22 11:54:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-22 11:54:55 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://77bf084605e7be9308e5b69dab732c63177fea7338ec681fb093e61f58c35f35}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.244: INFO: Pod "nginx-deployment-85ddf47c5d-pf9jg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-pf9jg,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-pf9jg,UID:2cbd6a36-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183481,Generation:0,CreationTimestamp:2020-07-22 11:55:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002b091d7 0xc002b091d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b09320} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b09340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.245: INFO: Pod "nginx-deployment-85ddf47c5d-r5jww" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-r5jww,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-r5jww,UID:231a05ab-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183344,Generation:0,CreationTimestamp:2020-07-22 11:54:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002b093b7 0xc002b093b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b094b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b094d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:45 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.2.118,StartTime:2020-07-22 11:54:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-22 11:54:55 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://92003e031388c8230b175b81c236a78b2a2006426ea171a20c2b7953fbf09171}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.245: INFO: Pod "nginx-deployment-85ddf47c5d-rfhvk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-rfhvk,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-rfhvk,UID:23234709-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183348,Generation:0,CreationTimestamp:2020-07-22 11:54:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002b09597 0xc002b09598}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b09610} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b09630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:46 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:45 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.73,StartTime:2020-07-22 11:54:46 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-22 11:54:56 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0e3eb08caab6b9bf881b89d446f8397a9fe8d2f0b546d60f8b648e0e4203d23b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.245: INFO: Pod "nginx-deployment-85ddf47c5d-tjjh4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tjjh4,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-tjjh4,UID:2c9d6535-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183454,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002b09897 0xc002b09898}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b09910} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b09990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:01 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.248: INFO: Pod "nginx-deployment-85ddf47c5d-tpgnv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-tpgnv,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-tpgnv,UID:231859d7-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183320,Generation:0,CreationTimestamp:2020-07-22 11:54:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002b09a07 0xc002b09a08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b09a80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b09aa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:45 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:54 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:54 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:54:45 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.70,StartTime:2020-07-22 11:54:45 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-22 11:54:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://974e8f32e87252170d4daf6cb8d1b2d66c11969efd75fdcc37ed649b42ff9cb9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.248: INFO: Pod "nginx-deployment-85ddf47c5d-vrwvp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-vrwvp,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-vrwvp,UID:2cb84e14-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183470,Generation:0,CreationTimestamp:2020-07-22 11:55:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002b09ba7 0xc002b09ba8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b09c20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b09c40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul 22 11:55:02.248: INFO: Pod "nginx-deployment-85ddf47c5d-wp8d2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-85ddf47c5d-wp8d2,GenerateName:nginx-deployment-85ddf47c5d-,Namespace:e2e-tests-deployment-hbvsb,SelfLink:/api/v1/namespaces/e2e-tests-deployment-hbvsb/pods/nginx-deployment-85ddf47c5d-wp8d2,UID:2cbd7aa0-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2183484,Generation:0,CreationTimestamp:2020-07-22 11:55:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 85ddf47c5d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-85ddf47c5d 2309ade1-cc12-11ea-b2c9-0242ac120008 0xc002b09ce7 0xc002b09ce8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-67cfb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-67cfb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-67cfb true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002b09d60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002b09d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:55:02 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:55:02.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-hbvsb" for this suite.
Jul 22 11:55:26.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:55:26.453: INFO: namespace: e2e-tests-deployment-hbvsb, resource: bindings, ignored listing per whitelist
Jul 22 11:55:26.487: INFO: namespace e2e-tests-deployment-hbvsb deletion completed in 24.170655561s

• [SLOW TEST:40.903 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
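The proportional-scaling case above exercises a Deployment whose RollingUpdate bounds (maxSurge/maxUnavailable) let the controller spread newly requested replicas across the old and new ReplicaSets mid-rollout. As a reference point only, here is a minimal Go sketch of such a Deployment built with the k8s.io/api types that appear in the pod dumps; the replica count and surge/unavailable values are illustrative assumptions, not the suite's exact parameters.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)

	// If the Deployment is scaled while a rolling update is in flight, the
	// controller distributes the extra replicas proportionally across the old
	// and new ReplicaSets within these surge/unavailable bounds.
	d := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(10), // illustrative count, not the suite's exact value
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}

	out, err := json.MarshalIndent(d, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
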
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:55:26.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul 22 11:55:26.703: INFO: PodSpec: initContainers in spec.initContainers
Jul 22 11:56:27.689: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3b72159f-cc12-11ea-aa05-0242ac11000b", GenerateName:"", Namespace:"e2e-tests-init-container-jt5wz", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-jt5wz/pods/pod-init-3b72159f-cc12-11ea-aa05-0242ac11000b", UID:"3b77062b-cc12-11ea-b2c9-0242ac120008", ResourceVersion:"2183932", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731015726, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"703153675"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-spwkg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0022718c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-spwkg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-spwkg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-spwkg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000b07878), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000168060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000b07900)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000b07920)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc000b07928), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000b0792c)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015727, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015727, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015727, loc:(*time.Location)(0x7950ac0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015726, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.2.136", StartTime:(*v1.Time)(0xc000b7d580), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0015da690)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0015da700)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://2879d3a6681c1f02d93b7b93fc9cc96484e451cf6a59bad75de3a42c8ccbb90d"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000b7d5c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000b7d5a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:56:27.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-jt5wz" for this suite.
Jul 22 11:56:49.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:56:49.854: INFO: namespace: e2e-tests-init-container-jt5wz, resource: bindings, ignored listing per whitelist
Jul 22 11:56:49.866: INFO: namespace e2e-tests-init-container-jt5wz deletion completed in 22.146663002s

• [SLOW TEST:83.378 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
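For context, the pod dumped above (init1 running /bin/false, init2 running /bin/true, app container run1 on pause:3.1) can be reconstructed as the following minimal sketch; it is assembled from values visible in the log, not copied from the suite's source. With RestartPolicy Always, the kubelet keeps restarting the failing init1, init2 never starts, and run1 stays Waiting, which matches the ContainersNotInitialized / ContainersNotReady conditions shown.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// init1 exits non-zero, so init2 never runs and run1 never starts; with
	// RestartPolicy Always the kubelet restarts init1 repeatedly (RestartCount
	// climbs, as in the status dump above).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-init-example", // illustrative name
			Labels: map[string]string{"name": "foo"},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
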
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:56:49.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-mgtsf
Jul 22 11:56:54.015: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-mgtsf
STEP: checking the pod's current state and verifying that restartCount is present
Jul 22 11:56:54.018: INFO: Initial restart count of pod liveness-http is 0
Jul 22 11:57:12.242: INFO: Restart count of pod e2e-tests-container-probe-mgtsf/liveness-http is now 1 (18.223819863s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:57:12.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-mgtsf" for this suite.
Jul 22 11:57:18.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:57:18.294: INFO: namespace: e2e-tests-container-probe-mgtsf, resource: bindings, ignored listing per whitelist
Jul 22 11:57:18.348: INFO: namespace e2e-tests-container-probe-mgtsf deletion completed in 6.089172936s

• [SLOW TEST:28.482 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
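The liveness-http case restarts a container once its /healthz HTTP probe fails, which is why restartCount moves from 0 to 1 about 18 seconds in. A minimal sketch of a pod wired that way follows; the image, port, and probe timings are assumptions for illustration, not the suite's actual liveness image or settings.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// When the /healthz probe fails FailureThreshold consecutive times, the
	// kubelet kills and restarts the container, incrementing restartCount.
	// Note: the v1.13-era k8s.io/api used by this suite embeds the probe
	// handler as Handler; newer releases of the library name it ProbeHandler.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "docker.io/library/nginx:1.14-alpine", // illustrative image, not the suite's liveness image
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       3,
					FailureThreshold:    1,
				},
			}},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
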
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:57:18.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 22 11:57:18.467: INFO: Waiting up to 5m0s for pod "downward-api-7e0b67f1-cc12-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-l56xd" to be "success or failure"
Jul 22 11:57:18.480: INFO: Pod "downward-api-7e0b67f1-cc12-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.362844ms
Jul 22 11:57:20.573: INFO: Pod "downward-api-7e0b67f1-cc12-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105710701s
Jul 22 11:57:22.577: INFO: Pod "downward-api-7e0b67f1-cc12-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109969494s
STEP: Saw pod success
Jul 22 11:57:22.578: INFO: Pod "downward-api-7e0b67f1-cc12-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:57:22.581: INFO: Trying to get logs from node hunter-worker pod downward-api-7e0b67f1-cc12-11ea-aa05-0242ac11000b container dapi-container: 
STEP: delete the pod
Jul 22 11:57:22.601: INFO: Waiting for pod downward-api-7e0b67f1-cc12-11ea-aa05-0242ac11000b to disappear
Jul 22 11:57:22.605: INFO: Pod downward-api-7e0b67f1-cc12-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:57:22.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-l56xd" for this suite.
Jul 22 11:57:28.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:57:28.684: INFO: namespace: e2e-tests-downward-api-l56xd, resource: bindings, ignored listing per whitelist
Jul 22 11:57:28.697: INFO: namespace e2e-tests-downward-api-l56xd deletion completed in 6.088250636s

• [SLOW TEST:10.349 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
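This case checks that when a container declares no resource limits, downward-API env vars that reference limits.cpu and limits.memory resolve to the node's allocatable values. A minimal sketch of such a pod follows; the env var names, image, command, and divisors are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The container sets no resource limits, so the resourceFieldRef env vars
	// below fall back to the node's allocatable CPU and memory.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.cpu",
								Divisor:  resource.MustParse("1m"),
							},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.memory",
								Divisor:  resource.MustParse("1Mi"),
							},
						},
					},
				},
			}},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
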
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:57:28.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test override arguments
Jul 22 11:57:28.836: INFO: Waiting up to 5m0s for pod "client-containers-843cd976-cc12-11ea-aa05-0242ac11000b" in namespace "e2e-tests-containers-ccnxj" to be "success or failure"
Jul 22 11:57:28.840: INFO: Pod "client-containers-843cd976-cc12-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264786ms
Jul 22 11:57:31.143: INFO: Pod "client-containers-843cd976-cc12-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30677493s
Jul 22 11:57:33.147: INFO: Pod "client-containers-843cd976-cc12-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.31080391s
STEP: Saw pod success
Jul 22 11:57:33.147: INFO: Pod "client-containers-843cd976-cc12-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:57:33.149: INFO: Trying to get logs from node hunter-worker pod client-containers-843cd976-cc12-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 11:57:33.164: INFO: Waiting for pod client-containers-843cd976-cc12-11ea-aa05-0242ac11000b to disappear
Jul 22 11:57:33.328: INFO: Pod client-containers-843cd976-cc12-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:57:33.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-containers-ccnxj" for this suite.
Jul 22 11:57:39.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:57:39.418: INFO: namespace: e2e-tests-containers-ccnxj, resource: bindings, ignored listing per whitelist
Jul 22 11:57:39.423: INFO: namespace e2e-tests-containers-ccnxj deletion completed in 6.090941432s

• [SLOW TEST:10.726 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
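Overriding the image's default arguments ("docker cmd") is done by setting Args on the container: Args replaces the image CMD while any image ENTRYPOINT is left in place, and setting Command as well would replace the ENTRYPOINT too. A minimal sketch follows; the image and argument values are assumptions, not the suite's entrypoint-tester setup.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Args alone overrides the image's CMD ("docker cmd"); any image ENTRYPOINT
	// stays in effect. Setting Command too would override the ENTRYPOINT as well.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				Args:  []string{"echo", "overridden", "arguments"}, // replaces busybox's default CMD ("sh")
			}},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
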
SSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:57:39.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 22 11:57:39.532: INFO: Waiting up to 5m0s for pod "downward-api-8a9c332f-cc12-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-6f997" to be "success or failure"
Jul 22 11:57:39.671: INFO: Pod "downward-api-8a9c332f-cc12-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 139.325863ms
Jul 22 11:57:41.705: INFO: Pod "downward-api-8a9c332f-cc12-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173486797s
Jul 22 11:57:43.948: INFO: Pod "downward-api-8a9c332f-cc12-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.415923533s
Jul 22 11:57:45.952: INFO: Pod "downward-api-8a9c332f-cc12-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.420280194s
STEP: Saw pod success
Jul 22 11:57:45.952: INFO: Pod "downward-api-8a9c332f-cc12-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:57:45.955: INFO: Trying to get logs from node hunter-worker2 pod downward-api-8a9c332f-cc12-11ea-aa05-0242ac11000b container dapi-container: 
STEP: delete the pod
Jul 22 11:57:45.990: INFO: Waiting for pod downward-api-8a9c332f-cc12-11ea-aa05-0242ac11000b to disappear
Jul 22 11:57:46.032: INFO: Pod downward-api-8a9c332f-cc12-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:57:46.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-6f997" for this suite.
Jul 22 11:57:52.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:57:52.149: INFO: namespace: e2e-tests-downward-api-6f997, resource: bindings, ignored listing per whitelist
Jul 22 11:57:52.186: INFO: namespace e2e-tests-downward-api-6f997 deletion completed in 6.119872742s

• [SLOW TEST:12.763 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
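The host-IP case relies on the downward API's fieldRef to status.hostIP, which the kubelet resolves and injects as an env var when the container starts. A minimal sketch of such a pod follows; the env var name, image, and command are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The kubelet resolves status.hostIP at container start and exposes it as
	// an ordinary env var, so the process can learn its node's IP address.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-hostip-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
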
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:57:52.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul 22 11:57:52.275: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:58:01.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-cl65x" for this suite.
Jul 22 11:58:25.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:58:25.122: INFO: namespace: e2e-tests-init-container-cl65x, resource: bindings, ignored listing per whitelist
Jul 22 11:58:25.183: INFO: namespace e2e-tests-init-container-cl65x deletion completed in 24.089297675s

• [SLOW TEST:32.997 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
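This is the success-path counterpart of the failing init-container sketch earlier: both init containers exit zero, so they run to completion in order and only then does the app container start. A compact sketch follows, using the same busybox and pause images seen in the log; the pod name is an assumption.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// init1 and init2 both exit zero, so they complete sequentially before the
	// app container run1 is started.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-success-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
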
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:58:25.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 22 11:58:25.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-7cgl2'
Jul 22 11:58:25.411: INFO: stderr: ""
Jul 22 11:58:25.411: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532
Jul 22 11:58:25.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-7cgl2'
Jul 22 11:58:31.160: INFO: stderr: ""
Jul 22 11:58:31.160: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:58:31.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7cgl2" for this suite.
Jul 22 11:58:37.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:58:37.243: INFO: namespace: e2e-tests-kubectl-7cgl2, resource: bindings, ignored listing per whitelist
Jul 22 11:58:37.246: INFO: namespace e2e-tests-kubectl-7cgl2 deletion completed in 6.082215555s

• [SLOW TEST:12.062 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
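kubectl run with --restart=Never and the run-pod/v1 generator creates a bare Pod (no managing controller) whose RestartPolicy is Never, which is why deleting it removes the workload outright. The sketch below approximates that object; the run= label and container name follow the generator's usual convention and should be read as assumptions rather than the exact object kubectl produced here.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Approximation of what `kubectl run e2e-test-nginx-pod --restart=Never
	// --image=docker.io/library/nginx:1.14-alpine` creates: a standalone Pod
	// with no owner reference and RestartPolicy Never. Label and container
	// name are assumptions based on the generator's usual convention.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "e2e-test-nginx-pod",
			Labels: map[string]string{"run": "e2e-test-nginx-pod"},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "e2e-test-nginx-pod",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
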
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:58:37.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test env composition
Jul 22 11:58:37.353: INFO: Waiting up to 5m0s for pod "var-expansion-ad12a786-cc12-11ea-aa05-0242ac11000b" in namespace "e2e-tests-var-expansion-mrbs8" to be "success or failure"
Jul 22 11:58:37.394: INFO: Pod "var-expansion-ad12a786-cc12-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 41.176217ms
Jul 22 11:58:39.485: INFO: Pod "var-expansion-ad12a786-cc12-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131909405s
Jul 22 11:58:41.489: INFO: Pod "var-expansion-ad12a786-cc12-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.135659409s
Jul 22 11:58:43.491: INFO: Pod "var-expansion-ad12a786-cc12-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.138270954s
STEP: Saw pod success
Jul 22 11:58:43.492: INFO: Pod "var-expansion-ad12a786-cc12-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:58:43.493: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-ad12a786-cc12-11ea-aa05-0242ac11000b container dapi-container: 
STEP: delete the pod
Jul 22 11:58:43.542: INFO: Waiting for pod var-expansion-ad12a786-cc12-11ea-aa05-0242ac11000b to disappear
Jul 22 11:58:43.547: INFO: Pod var-expansion-ad12a786-cc12-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:58:43.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-mrbs8" for this suite.
Jul 22 11:58:49.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:58:49.611: INFO: namespace: e2e-tests-var-expansion-mrbs8, resource: bindings, ignored listing per whitelist
Jul 22 11:58:49.630: INFO: namespace e2e-tests-var-expansion-mrbs8 deletion completed in 6.080109674s

• [SLOW TEST:12.383 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
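Env composition works because $(VAR) references in an env var's value are expanded from variables defined earlier in the same env list; a reference to an undefined (or later-defined) variable is left as literal text. A minimal sketch follows; the variable names and values are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// $(FOO) and $(BAR) in COMPOSED are expanded because both are defined
	// earlier in the same env list; referencing an undefined variable would
	// leave the literal "$(NAME)" in place.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					{Name: "COMPOSED", Value: "$(FOO);;$(BAR)"},
				},
			}},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
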
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:58:49.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 22 11:58:49.781: INFO: Waiting up to 5m0s for pod "pod-b47730ee-cc12-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-w9wvp" to be "success or failure"
Jul 22 11:58:49.787: INFO: Pod "pod-b47730ee-cc12-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.693229ms
Jul 22 11:58:51.790: INFO: Pod "pod-b47730ee-cc12-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008895807s
Jul 22 11:58:53.793: INFO: Pod "pod-b47730ee-cc12-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.01215199s
Jul 22 11:58:55.796: INFO: Pod "pod-b47730ee-cc12-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015452174s
STEP: Saw pod success
Jul 22 11:58:55.796: INFO: Pod "pod-b47730ee-cc12-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:58:55.798: INFO: Trying to get logs from node hunter-worker pod pod-b47730ee-cc12-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 11:58:55.839: INFO: Waiting for pod pod-b47730ee-cc12-11ea-aa05-0242ac11000b to disappear
Jul 22 11:58:55.880: INFO: Pod pod-b47730ee-cc12-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:58:55.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-w9wvp" for this suite.
Jul 22 11:59:01.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:59:02.031: INFO: namespace: e2e-tests-emptydir-w9wvp, resource: bindings, ignored listing per whitelist
Jul 22 11:59:02.036: INFO: namespace e2e-tests-emptydir-w9wvp deletion completed in 6.150772912s

• [SLOW TEST:12.406 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
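The (non-root,0666,default) variant mounts an emptyDir with the default medium (node-local disk), runs the container as a non-root UID, and verifies a file created in the volume with mode 0666. A minimal sketch in the same spirit follows; the UID, image, and shell command are assumptions standing in for the suite's mounttest container.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	runAsUser := int64(1001) // any non-root UID; the exact value is an assumption

	// emptyDir with no medium set uses the node's default disk-backed storage;
	// the shell command stands in for the suite's mounttest container and just
	// creates a file with mode 0666 and lists it.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-example"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &runAsUser},
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "touch /test-volume/data && chmod 0666 /test-volume/data && ls -ln /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}

	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
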
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:59:02.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul 22 11:59:02.161: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:59:02.163: INFO: Number of nodes with available pods: 0
Jul 22 11:59:02.163: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 11:59:03.168: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:59:03.171: INFO: Number of nodes with available pods: 0
Jul 22 11:59:03.171: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 11:59:04.480: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:59:04.483: INFO: Number of nodes with available pods: 0
Jul 22 11:59:04.483: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 11:59:05.167: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:59:05.170: INFO: Number of nodes with available pods: 0
Jul 22 11:59:05.170: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 11:59:06.474: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:59:06.477: INFO: Number of nodes with available pods: 0
Jul 22 11:59:06.477: INFO: Node hunter-worker is running more than one daemon pod
Jul 22 11:59:07.169: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:59:07.172: INFO: Number of nodes with available pods: 2
Jul 22 11:59:07.172: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jul 22 11:59:07.201: INFO: DaemonSet pods can't tolerate node hunter-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 22 11:59:07.213: INFO: Number of nodes with available pods: 2
Jul 22 11:59:07.213: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-5sc2x, will wait for the garbage collector to delete the pods
Jul 22 11:59:08.286: INFO: Deleting DaemonSet.extensions daemon-set took: 8.216872ms
Jul 22 11:59:08.786: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.22866ms
Jul 22 11:59:12.290: INFO: Number of nodes with available pods: 0
Jul 22 11:59:12.290: INFO: Number of running nodes: 0, number of available pods: 0
Jul 22 11:59:12.292: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-5sc2x/daemonsets","resourceVersion":"2184531"},"items":null}

Jul 22 11:59:12.335: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-5sc2x/pods","resourceVersion":"2184531"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:59:12.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-daemonsets-5sc2x" for this suite.
Jul 22 11:59:18.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:59:18.419: INFO: namespace: e2e-tests-daemonsets-5sc2x, resource: bindings, ignored listing per whitelist
Jul 22 11:59:18.435: INFO: namespace e2e-tests-daemonsets-5sc2x deletion completed in 6.085792502s

• [SLOW TEST:16.399 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
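
The steps above create a plain DaemonSet, wait for one available pod per schedulable node (the tainted control-plane node is skipped), force one daemon pod into the Failed phase, and expect the controller to recreate it. Below is a minimal sketch of such a DaemonSet; the label key, container name, and image are hypothetical placeholders, not the conformance fixture.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // hypothetical label key

	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// No toleration for node-role.kubernetes.io/master:NoSchedule, which is
					// why the log above skips the hunter-control-plane node.
					Containers: []corev1.Container{{
						Name:  "app",                // hypothetical container name
						Image: "busybox",            // placeholder image
						Command: []string{"sleep", "3600"},
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}
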
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:59:18.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-map-c5a64615-cc12-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 11:59:18.581: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c5a6c21b-cc12-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-c5db6" to be "success or failure"
Jul 22 11:59:18.598: INFO: Pod "pod-projected-configmaps-c5a6c21b-cc12-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.293679ms
Jul 22 11:59:20.602: INFO: Pod "pod-projected-configmaps-c5a6c21b-cc12-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020685488s
Jul 22 11:59:22.606: INFO: Pod "pod-projected-configmaps-c5a6c21b-cc12-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024626981s
STEP: Saw pod success
Jul 22 11:59:22.606: INFO: Pod "pod-projected-configmaps-c5a6c21b-cc12-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 11:59:22.608: INFO: Trying to get logs from node hunter-worker2 pod pod-projected-configmaps-c5a6c21b-cc12-11ea-aa05-0242ac11000b container projected-configmap-volume-test: 
STEP: delete the pod
Jul 22 11:59:22.645: INFO: Waiting for pod pod-projected-configmaps-c5a6c21b-cc12-11ea-aa05-0242ac11000b to disappear
Jul 22 11:59:22.651: INFO: Pod pod-projected-configmaps-c5a6c21b-cc12-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:59:22.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-c5db6" for this suite.
Jul 22 11:59:28.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:59:28.787: INFO: namespace: e2e-tests-projected-c5db6, resource: bindings, ignored listing per whitelist
Jul 22 11:59:28.791: INFO: namespace e2e-tests-projected-c5db6 deletion completed in 6.136167852s

• [SLOW TEST:10.355 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
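
This case mounts a ConfigMap through a projected volume, remaps a key to a custom path (the "mappings" part of the spec name), and reads it back from a container running as a non-root UID. A sketch under those assumptions follows; the ConfigMap name, key, path, image, and UID are illustrative, not taken from the run.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRootUID := int64(1000) // hypothetical non-root UID

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUID},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"}, // hypothetical ConfigMap
								// The "mapping": project key data-1 to a custom path instead of its key name.
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
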
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:59:28.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 11:59:28.879: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jul 22 11:59:28.898: INFO: Pod name sample-pod: Found 0 pods out of 1
Jul 22 11:59:33.903: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 22 11:59:33.903: INFO: Creating deployment "test-rolling-update-deployment"
Jul 22 11:59:33.908: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jul 22 11:59:33.917: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jul 22 11:59:35.925: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jul 22 11:59:35.928: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015973, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015973, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015974, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015973, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 22 11:59:37.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015973, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015973, loc:(*time.Location)(0x7950ac0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015974, loc:(*time.Location)(0x7950ac0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731015973, loc:(*time.Location)(0x7950ac0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-75db98fb4c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 22 11:59:39.933: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
Jul 22 11:59:39.941: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-fls6h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fls6h/deployments/test-rolling-update-deployment,UID:ceca188b-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2184674,Generation:1,CreationTimestamp:2020-07-22 11:59:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-22 11:59:33 +0000 UTC 2020-07-22 11:59:33 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-22 11:59:38 +0000 UTC 2020-07-22 11:59:33 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-75db98fb4c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jul 22 11:59:39.943: INFO: New ReplicaSet "test-rolling-update-deployment-75db98fb4c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c,GenerateName:,Namespace:e2e-tests-deployment-fls6h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fls6h/replicasets/test-rolling-update-deployment-75db98fb4c,UID:cecce5b8-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2184664,Generation:1,CreationTimestamp:2020-07-22 11:59:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ceca188b-cc12-11ea-b2c9-0242ac120008 0xc0029c8227 0xc0029c8228}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul 22 11:59:39.943: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jul 22 11:59:39.944: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-fls6h,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-fls6h/replicasets/test-rolling-update-controller,UID:cbcb6842-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2184672,Generation:2,CreationTimestamp:2020-07-22 11:59:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment ceca188b-cc12-11ea-b2c9-0242ac120008 0xc0029c8167 0xc0029c8168}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul 22 11:59:39.946: INFO: Pod "test-rolling-update-deployment-75db98fb4c-8q6rx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-75db98fb4c-8q6rx,GenerateName:test-rolling-update-deployment-75db98fb4c-,Namespace:e2e-tests-deployment-fls6h,SelfLink:/api/v1/namespaces/e2e-tests-deployment-fls6h/pods/test-rolling-update-deployment-75db98fb4c-8q6rx,UID:cecd8633-cc12-11ea-b2c9-0242ac120008,ResourceVersion:2184663,Generation:0,CreationTimestamp:2020-07-22 11:59:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 75db98fb4c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-75db98fb4c cecce5b8-cc12-11ea-b2c9-0242ac120008 0xc0026f5867 0xc0026f5868}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-grltd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-grltd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-grltd true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:hunter-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026f58e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026f5900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:59:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:59:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:59:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 11:59:33 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.2,PodIP:10.244.1.92,StartTime:2020-07-22 11:59:33 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-22 11:59:37 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://32cf039cd02fd8cf029776525f4dbdd52457486813bfc236a737b3b246f871d9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:59:39.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-deployment-fls6h" for this suite.
Jul 22 11:59:47.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 11:59:48.027: INFO: namespace: e2e-tests-deployment-fls6h, resource: bindings, ignored listing per whitelist
Jul 22 11:59:48.045: INFO: namespace e2e-tests-deployment-fls6h deletion completed in 8.095302994s

• [SLOW TEST:19.254 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
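
The deployment dump above shows the default RollingUpdate strategy with 25% maxUnavailable and 25% maxSurge (the "25%!,(MISSING)" fragments are an artifact of the test's own formatting of the percent signs, not a real field value). Below is a minimal sketch of a Deployment built the same way, reusing the name: sample-pod label and redis image that appear in the dump; the replica count and everything else is assumed.

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	replicas := int32(1)
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	labels := map[string]string{"name": "sample-pod"} // label used by the run above

	deploy := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0", // image from the dump above
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(deploy, "", "  ")
	fmt.Println(string(out))
}

Because the selector matches the pre-existing test-rolling-update-controller ReplicaSet's pods, the Deployment adopts that ReplicaSet, scales it to zero, and brings up the new template's pod, which is the "delete old pods and create new ones" behavior asserted above.
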
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 11:59:48.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul 22 11:59:58.405: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tb2vq PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 11:59:58.406: INFO: >>> kubeConfig: /root/.kube/config
I0722 11:59:58.437574       7 log.go:172] (0xc000508d10) (0xc001712fa0) Create stream
I0722 11:59:58.437607       7 log.go:172] (0xc000508d10) (0xc001712fa0) Stream added, broadcasting: 1
I0722 11:59:58.439685       7 log.go:172] (0xc000508d10) Reply frame received for 1
I0722 11:59:58.439724       7 log.go:172] (0xc000508d10) (0xc0027ae000) Create stream
I0722 11:59:58.439738       7 log.go:172] (0xc000508d10) (0xc0027ae000) Stream added, broadcasting: 3
I0722 11:59:58.440564       7 log.go:172] (0xc000508d10) Reply frame received for 3
I0722 11:59:58.440600       7 log.go:172] (0xc000508d10) (0xc0027aa000) Create stream
I0722 11:59:58.440614       7 log.go:172] (0xc000508d10) (0xc0027aa000) Stream added, broadcasting: 5
I0722 11:59:58.441703       7 log.go:172] (0xc000508d10) Reply frame received for 5
I0722 11:59:58.506373       7 log.go:172] (0xc000508d10) Data frame received for 5
I0722 11:59:58.506414       7 log.go:172] (0xc0027aa000) (5) Data frame handling
I0722 11:59:58.506443       7 log.go:172] (0xc000508d10) Data frame received for 3
I0722 11:59:58.506459       7 log.go:172] (0xc0027ae000) (3) Data frame handling
I0722 11:59:58.506474       7 log.go:172] (0xc0027ae000) (3) Data frame sent
I0722 11:59:58.506485       7 log.go:172] (0xc000508d10) Data frame received for 3
I0722 11:59:58.506503       7 log.go:172] (0xc0027ae000) (3) Data frame handling
I0722 11:59:58.508487       7 log.go:172] (0xc000508d10) Data frame received for 1
I0722 11:59:58.508510       7 log.go:172] (0xc001712fa0) (1) Data frame handling
I0722 11:59:58.508532       7 log.go:172] (0xc001712fa0) (1) Data frame sent
I0722 11:59:58.508641       7 log.go:172] (0xc000508d10) (0xc001712fa0) Stream removed, broadcasting: 1
I0722 11:59:58.508671       7 log.go:172] (0xc000508d10) Go away received
I0722 11:59:58.508955       7 log.go:172] (0xc000508d10) (0xc001712fa0) Stream removed, broadcasting: 1
I0722 11:59:58.508999       7 log.go:172] (0xc000508d10) (0xc0027ae000) Stream removed, broadcasting: 3
I0722 11:59:58.509030       7 log.go:172] (0xc000508d10) (0xc0027aa000) Stream removed, broadcasting: 5
Jul 22 11:59:58.509: INFO: Exec stderr: ""
Jul 22 11:59:58.509: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tb2vq PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 11:59:58.509: INFO: >>> kubeConfig: /root/.kube/config
I0722 11:59:58.539330       7 log.go:172] (0xc0005091e0) (0xc001713220) Create stream
I0722 11:59:58.539356       7 log.go:172] (0xc0005091e0) (0xc001713220) Stream added, broadcasting: 1
I0722 11:59:58.541062       7 log.go:172] (0xc0005091e0) Reply frame received for 1
I0722 11:59:58.541089       7 log.go:172] (0xc0005091e0) (0xc0027aa0a0) Create stream
I0722 11:59:58.541100       7 log.go:172] (0xc0005091e0) (0xc0027aa0a0) Stream added, broadcasting: 3
I0722 11:59:58.541958       7 log.go:172] (0xc0005091e0) Reply frame received for 3
I0722 11:59:58.541981       7 log.go:172] (0xc0005091e0) (0xc0027aa140) Create stream
I0722 11:59:58.541991       7 log.go:172] (0xc0005091e0) (0xc0027aa140) Stream added, broadcasting: 5
I0722 11:59:58.542696       7 log.go:172] (0xc0005091e0) Reply frame received for 5
I0722 11:59:58.605796       7 log.go:172] (0xc0005091e0) Data frame received for 3
I0722 11:59:58.605828       7 log.go:172] (0xc0027aa0a0) (3) Data frame handling
I0722 11:59:58.605847       7 log.go:172] (0xc0027aa0a0) (3) Data frame sent
I0722 11:59:58.605856       7 log.go:172] (0xc0005091e0) Data frame received for 3
I0722 11:59:58.605875       7 log.go:172] (0xc0005091e0) Data frame received for 5
I0722 11:59:58.605918       7 log.go:172] (0xc0027aa140) (5) Data frame handling
I0722 11:59:58.605954       7 log.go:172] (0xc0027aa0a0) (3) Data frame handling
I0722 11:59:58.607276       7 log.go:172] (0xc0005091e0) Data frame received for 1
I0722 11:59:58.607290       7 log.go:172] (0xc001713220) (1) Data frame handling
I0722 11:59:58.607301       7 log.go:172] (0xc001713220) (1) Data frame sent
I0722 11:59:58.607318       7 log.go:172] (0xc0005091e0) (0xc001713220) Stream removed, broadcasting: 1
I0722 11:59:58.607418       7 log.go:172] (0xc0005091e0) Go away received
I0722 11:59:58.607451       7 log.go:172] (0xc0005091e0) (0xc001713220) Stream removed, broadcasting: 1
I0722 11:59:58.607497       7 log.go:172] (0xc0005091e0) (0xc0027aa0a0) Stream removed, broadcasting: 3
I0722 11:59:58.607519       7 log.go:172] (0xc0005091e0) (0xc0027aa140) Stream removed, broadcasting: 5
Jul 22 11:59:58.607: INFO: Exec stderr: ""
Jul 22 11:59:58.607: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tb2vq PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 11:59:58.607: INFO: >>> kubeConfig: /root/.kube/config
I0722 11:59:58.640962       7 log.go:172] (0xc0005096b0) (0xc0017135e0) Create stream
I0722 11:59:58.641004       7 log.go:172] (0xc0005096b0) (0xc0017135e0) Stream added, broadcasting: 1
I0722 11:59:58.643932       7 log.go:172] (0xc0005096b0) Reply frame received for 1
I0722 11:59:58.643982       7 log.go:172] (0xc0005096b0) (0xc001713680) Create stream
I0722 11:59:58.643997       7 log.go:172] (0xc0005096b0) (0xc001713680) Stream added, broadcasting: 3
I0722 11:59:58.644927       7 log.go:172] (0xc0005096b0) Reply frame received for 3
I0722 11:59:58.644966       7 log.go:172] (0xc0005096b0) (0xc0027ae0a0) Create stream
I0722 11:59:58.644979       7 log.go:172] (0xc0005096b0) (0xc0027ae0a0) Stream added, broadcasting: 5
I0722 11:59:58.645830       7 log.go:172] (0xc0005096b0) Reply frame received for 5
I0722 11:59:58.709920       7 log.go:172] (0xc0005096b0) Data frame received for 5
I0722 11:59:58.709988       7 log.go:172] (0xc0027ae0a0) (5) Data frame handling
I0722 11:59:58.710037       7 log.go:172] (0xc0005096b0) Data frame received for 3
I0722 11:59:58.710073       7 log.go:172] (0xc001713680) (3) Data frame handling
I0722 11:59:58.710133       7 log.go:172] (0xc001713680) (3) Data frame sent
I0722 11:59:58.710157       7 log.go:172] (0xc0005096b0) Data frame received for 3
I0722 11:59:58.710173       7 log.go:172] (0xc001713680) (3) Data frame handling
I0722 11:59:58.711543       7 log.go:172] (0xc0005096b0) Data frame received for 1
I0722 11:59:58.711572       7 log.go:172] (0xc0017135e0) (1) Data frame handling
I0722 11:59:58.711604       7 log.go:172] (0xc0017135e0) (1) Data frame sent
I0722 11:59:58.711619       7 log.go:172] (0xc0005096b0) (0xc0017135e0) Stream removed, broadcasting: 1
I0722 11:59:58.711699       7 log.go:172] (0xc0005096b0) Go away received
I0722 11:59:58.711738       7 log.go:172] (0xc0005096b0) (0xc0017135e0) Stream removed, broadcasting: 1
I0722 11:59:58.711765       7 log.go:172] (0xc0005096b0) (0xc001713680) Stream removed, broadcasting: 3
I0722 11:59:58.711777       7 log.go:172] (0xc0005096b0) (0xc0027ae0a0) Stream removed, broadcasting: 5
Jul 22 11:59:58.711: INFO: Exec stderr: ""
Jul 22 11:59:58.711: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tb2vq PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 11:59:58.711: INFO: >>> kubeConfig: /root/.kube/config
I0722 11:59:58.742018       7 log.go:172] (0xc000a186e0) (0xc0027ae3c0) Create stream
I0722 11:59:58.742066       7 log.go:172] (0xc000a186e0) (0xc0027ae3c0) Stream added, broadcasting: 1
I0722 11:59:58.744408       7 log.go:172] (0xc000a186e0) Reply frame received for 1
I0722 11:59:58.744445       7 log.go:172] (0xc000a186e0) (0xc0017137c0) Create stream
I0722 11:59:58.744455       7 log.go:172] (0xc000a186e0) (0xc0017137c0) Stream added, broadcasting: 3
I0722 11:59:58.745652       7 log.go:172] (0xc000a186e0) Reply frame received for 3
I0722 11:59:58.745694       7 log.go:172] (0xc000a186e0) (0xc0027ae460) Create stream
I0722 11:59:58.745707       7 log.go:172] (0xc000a186e0) (0xc0027ae460) Stream added, broadcasting: 5
I0722 11:59:58.746707       7 log.go:172] (0xc000a186e0) Reply frame received for 5
I0722 11:59:58.808080       7 log.go:172] (0xc000a186e0) Data frame received for 5
I0722 11:59:58.808133       7 log.go:172] (0xc0027ae460) (5) Data frame handling
I0722 11:59:58.808185       7 log.go:172] (0xc000a186e0) Data frame received for 3
I0722 11:59:58.808227       7 log.go:172] (0xc0017137c0) (3) Data frame handling
I0722 11:59:58.808305       7 log.go:172] (0xc0017137c0) (3) Data frame sent
I0722 11:59:58.808338       7 log.go:172] (0xc000a186e0) Data frame received for 3
I0722 11:59:58.808368       7 log.go:172] (0xc0017137c0) (3) Data frame handling
I0722 11:59:58.810357       7 log.go:172] (0xc000a186e0) Data frame received for 1
I0722 11:59:58.810391       7 log.go:172] (0xc0027ae3c0) (1) Data frame handling
I0722 11:59:58.810416       7 log.go:172] (0xc0027ae3c0) (1) Data frame sent
I0722 11:59:58.810436       7 log.go:172] (0xc000a186e0) (0xc0027ae3c0) Stream removed, broadcasting: 1
I0722 11:59:58.810548       7 log.go:172] (0xc000a186e0) (0xc0027ae3c0) Stream removed, broadcasting: 1
I0722 11:59:58.810594       7 log.go:172] (0xc000a186e0) Go away received
I0722 11:59:58.810642       7 log.go:172] (0xc000a186e0) (0xc0017137c0) Stream removed, broadcasting: 3
I0722 11:59:58.810680       7 log.go:172] (0xc000a186e0) (0xc0027ae460) Stream removed, broadcasting: 5
Jul 22 11:59:58.810: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jul 22 11:59:58.810: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tb2vq PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 11:59:58.810: INFO: >>> kubeConfig: /root/.kube/config
I0722 11:59:58.837728       7 log.go:172] (0xc000509b80) (0xc0017139a0) Create stream
I0722 11:59:58.837764       7 log.go:172] (0xc000509b80) (0xc0017139a0) Stream added, broadcasting: 1
I0722 11:59:58.839366       7 log.go:172] (0xc000509b80) Reply frame received for 1
I0722 11:59:58.839398       7 log.go:172] (0xc000509b80) (0xc001193c20) Create stream
I0722 11:59:58.839407       7 log.go:172] (0xc000509b80) (0xc001193c20) Stream added, broadcasting: 3
I0722 11:59:58.840224       7 log.go:172] (0xc000509b80) Reply frame received for 3
I0722 11:59:58.840259       7 log.go:172] (0xc000509b80) (0xc0027aa1e0) Create stream
I0722 11:59:58.840272       7 log.go:172] (0xc000509b80) (0xc0027aa1e0) Stream added, broadcasting: 5
I0722 11:59:58.841131       7 log.go:172] (0xc000509b80) Reply frame received for 5
I0722 11:59:58.896475       7 log.go:172] (0xc000509b80) Data frame received for 5
I0722 11:59:58.896509       7 log.go:172] (0xc0027aa1e0) (5) Data frame handling
I0722 11:59:58.896535       7 log.go:172] (0xc000509b80) Data frame received for 3
I0722 11:59:58.896547       7 log.go:172] (0xc001193c20) (3) Data frame handling
I0722 11:59:58.896575       7 log.go:172] (0xc001193c20) (3) Data frame sent
I0722 11:59:58.896588       7 log.go:172] (0xc000509b80) Data frame received for 3
I0722 11:59:58.896599       7 log.go:172] (0xc001193c20) (3) Data frame handling
I0722 11:59:58.898315       7 log.go:172] (0xc000509b80) Data frame received for 1
I0722 11:59:58.898333       7 log.go:172] (0xc0017139a0) (1) Data frame handling
I0722 11:59:58.898344       7 log.go:172] (0xc0017139a0) (1) Data frame sent
I0722 11:59:58.898356       7 log.go:172] (0xc000509b80) (0xc0017139a0) Stream removed, broadcasting: 1
I0722 11:59:58.898396       7 log.go:172] (0xc000509b80) Go away received
I0722 11:59:58.898457       7 log.go:172] (0xc000509b80) (0xc0017139a0) Stream removed, broadcasting: 1
I0722 11:59:58.898480       7 log.go:172] (0xc000509b80) (0xc001193c20) Stream removed, broadcasting: 3
I0722 11:59:58.898503       7 log.go:172] (0xc000509b80) (0xc0027aa1e0) Stream removed, broadcasting: 5
Jul 22 11:59:58.898: INFO: Exec stderr: ""
Jul 22 11:59:58.898: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tb2vq PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 11:59:58.898: INFO: >>> kubeConfig: /root/.kube/config
I0722 11:59:58.927289       7 log.go:172] (0xc000a18dc0) (0xc0027ae640) Create stream
I0722 11:59:58.927314       7 log.go:172] (0xc000a18dc0) (0xc0027ae640) Stream added, broadcasting: 1
I0722 11:59:58.929468       7 log.go:172] (0xc000a18dc0) Reply frame received for 1
I0722 11:59:58.929511       7 log.go:172] (0xc000a18dc0) (0xc0026610e0) Create stream
I0722 11:59:58.929523       7 log.go:172] (0xc000a18dc0) (0xc0026610e0) Stream added, broadcasting: 3
I0722 11:59:58.930248       7 log.go:172] (0xc000a18dc0) Reply frame received for 3
I0722 11:59:58.930280       7 log.go:172] (0xc000a18dc0) (0xc001713ae0) Create stream
I0722 11:59:58.930286       7 log.go:172] (0xc000a18dc0) (0xc001713ae0) Stream added, broadcasting: 5
I0722 11:59:58.930984       7 log.go:172] (0xc000a18dc0) Reply frame received for 5
I0722 11:59:58.998009       7 log.go:172] (0xc000a18dc0) Data frame received for 3
I0722 11:59:58.998059       7 log.go:172] (0xc0026610e0) (3) Data frame handling
I0722 11:59:58.998081       7 log.go:172] (0xc0026610e0) (3) Data frame sent
I0722 11:59:58.998101       7 log.go:172] (0xc000a18dc0) Data frame received for 3
I0722 11:59:58.998117       7 log.go:172] (0xc0026610e0) (3) Data frame handling
I0722 11:59:58.998181       7 log.go:172] (0xc000a18dc0) Data frame received for 5
I0722 11:59:58.998235       7 log.go:172] (0xc001713ae0) (5) Data frame handling
I0722 11:59:58.999801       7 log.go:172] (0xc000a18dc0) Data frame received for 1
I0722 11:59:58.999836       7 log.go:172] (0xc0027ae640) (1) Data frame handling
I0722 11:59:58.999863       7 log.go:172] (0xc0027ae640) (1) Data frame sent
I0722 11:59:58.999893       7 log.go:172] (0xc000a18dc0) (0xc0027ae640) Stream removed, broadcasting: 1
I0722 11:59:58.999922       7 log.go:172] (0xc000a18dc0) Go away received
I0722 11:59:59.000091       7 log.go:172] (0xc000a18dc0) (0xc0027ae640) Stream removed, broadcasting: 1
I0722 11:59:59.000119       7 log.go:172] (0xc000a18dc0) (0xc0026610e0) Stream removed, broadcasting: 3
I0722 11:59:59.000140       7 log.go:172] (0xc000a18dc0) (0xc001713ae0) Stream removed, broadcasting: 5
Jul 22 11:59:59.000: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jul 22 11:59:59.000: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tb2vq PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 11:59:59.000: INFO: >>> kubeConfig: /root/.kube/config
I0722 11:59:59.036242       7 log.go:172] (0xc000e0a2c0) (0xc002661360) Create stream
I0722 11:59:59.036279       7 log.go:172] (0xc000e0a2c0) (0xc002661360) Stream added, broadcasting: 1
I0722 11:59:59.038616       7 log.go:172] (0xc000e0a2c0) Reply frame received for 1
I0722 11:59:59.038659       7 log.go:172] (0xc000e0a2c0) (0xc001713b80) Create stream
I0722 11:59:59.038668       7 log.go:172] (0xc000e0a2c0) (0xc001713b80) Stream added, broadcasting: 3
I0722 11:59:59.039605       7 log.go:172] (0xc000e0a2c0) Reply frame received for 3
I0722 11:59:59.039631       7 log.go:172] (0xc000e0a2c0) (0xc0027ae6e0) Create stream
I0722 11:59:59.039640       7 log.go:172] (0xc000e0a2c0) (0xc0027ae6e0) Stream added, broadcasting: 5
I0722 11:59:59.040406       7 log.go:172] (0xc000e0a2c0) Reply frame received for 5
I0722 11:59:59.100268       7 log.go:172] (0xc000e0a2c0) Data frame received for 3
I0722 11:59:59.100324       7 log.go:172] (0xc001713b80) (3) Data frame handling
I0722 11:59:59.100354       7 log.go:172] (0xc001713b80) (3) Data frame sent
I0722 11:59:59.100376       7 log.go:172] (0xc000e0a2c0) Data frame received for 3
I0722 11:59:59.100411       7 log.go:172] (0xc000e0a2c0) Data frame received for 5
I0722 11:59:59.100460       7 log.go:172] (0xc0027ae6e0) (5) Data frame handling
I0722 11:59:59.100498       7 log.go:172] (0xc001713b80) (3) Data frame handling
I0722 11:59:59.101927       7 log.go:172] (0xc000e0a2c0) Data frame received for 1
I0722 11:59:59.101960       7 log.go:172] (0xc002661360) (1) Data frame handling
I0722 11:59:59.101985       7 log.go:172] (0xc002661360) (1) Data frame sent
I0722 11:59:59.102002       7 log.go:172] (0xc000e0a2c0) (0xc002661360) Stream removed, broadcasting: 1
I0722 11:59:59.102034       7 log.go:172] (0xc000e0a2c0) Go away received
I0722 11:59:59.102194       7 log.go:172] (0xc000e0a2c0) (0xc002661360) Stream removed, broadcasting: 1
I0722 11:59:59.102232       7 log.go:172] (0xc000e0a2c0) (0xc001713b80) Stream removed, broadcasting: 3
I0722 11:59:59.102260       7 log.go:172] (0xc000e0a2c0) (0xc0027ae6e0) Stream removed, broadcasting: 5
Jul 22 11:59:59.102: INFO: Exec stderr: ""
Jul 22 11:59:59.102: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tb2vq PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 11:59:59.102: INFO: >>> kubeConfig: /root/.kube/config
I0722 11:59:59.136868       7 log.go:172] (0xc00232e2c0) (0xc001193f40) Create stream
I0722 11:59:59.136915       7 log.go:172] (0xc00232e2c0) (0xc001193f40) Stream added, broadcasting: 1
I0722 11:59:59.138756       7 log.go:172] (0xc00232e2c0) Reply frame received for 1
I0722 11:59:59.138805       7 log.go:172] (0xc00232e2c0) (0xc001713c20) Create stream
I0722 11:59:59.138821       7 log.go:172] (0xc00232e2c0) (0xc001713c20) Stream added, broadcasting: 3
I0722 11:59:59.139756       7 log.go:172] (0xc00232e2c0) Reply frame received for 3
I0722 11:59:59.139783       7 log.go:172] (0xc00232e2c0) (0xc002661400) Create stream
I0722 11:59:59.139800       7 log.go:172] (0xc00232e2c0) (0xc002661400) Stream added, broadcasting: 5
I0722 11:59:59.140649       7 log.go:172] (0xc00232e2c0) Reply frame received for 5
I0722 11:59:59.212510       7 log.go:172] (0xc00232e2c0) Data frame received for 5
I0722 11:59:59.212551       7 log.go:172] (0xc002661400) (5) Data frame handling
I0722 11:59:59.212573       7 log.go:172] (0xc00232e2c0) Data frame received for 3
I0722 11:59:59.212582       7 log.go:172] (0xc001713c20) (3) Data frame handling
I0722 11:59:59.212599       7 log.go:172] (0xc001713c20) (3) Data frame sent
I0722 11:59:59.212606       7 log.go:172] (0xc00232e2c0) Data frame received for 3
I0722 11:59:59.212614       7 log.go:172] (0xc001713c20) (3) Data frame handling
I0722 11:59:59.214357       7 log.go:172] (0xc00232e2c0) Data frame received for 1
I0722 11:59:59.214388       7 log.go:172] (0xc001193f40) (1) Data frame handling
I0722 11:59:59.214416       7 log.go:172] (0xc001193f40) (1) Data frame sent
I0722 11:59:59.214437       7 log.go:172] (0xc00232e2c0) (0xc001193f40) Stream removed, broadcasting: 1
I0722 11:59:59.214462       7 log.go:172] (0xc00232e2c0) Go away received
I0722 11:59:59.214539       7 log.go:172] (0xc00232e2c0) (0xc001193f40) Stream removed, broadcasting: 1
I0722 11:59:59.214556       7 log.go:172] (0xc00232e2c0) (0xc001713c20) Stream removed, broadcasting: 3
I0722 11:59:59.214566       7 log.go:172] (0xc00232e2c0) (0xc002661400) Stream removed, broadcasting: 5
Jul 22 11:59:59.214: INFO: Exec stderr: ""
Jul 22 11:59:59.214: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tb2vq PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 11:59:59.214: INFO: >>> kubeConfig: /root/.kube/config
I0722 11:59:59.246176       7 log.go:172] (0xc000e0a790) (0xc002661680) Create stream
I0722 11:59:59.246207       7 log.go:172] (0xc000e0a790) (0xc002661680) Stream added, broadcasting: 1
I0722 11:59:59.248996       7 log.go:172] (0xc000e0a790) Reply frame received for 1
I0722 11:59:59.249060       7 log.go:172] (0xc000e0a790) (0xc002622000) Create stream
I0722 11:59:59.249085       7 log.go:172] (0xc000e0a790) (0xc002622000) Stream added, broadcasting: 3
I0722 11:59:59.250155       7 log.go:172] (0xc000e0a790) Reply frame received for 3
I0722 11:59:59.250208       7 log.go:172] (0xc000e0a790) (0xc0026220a0) Create stream
I0722 11:59:59.250229       7 log.go:172] (0xc000e0a790) (0xc0026220a0) Stream added, broadcasting: 5
I0722 11:59:59.251163       7 log.go:172] (0xc000e0a790) Reply frame received for 5
I0722 11:59:59.319559       7 log.go:172] (0xc000e0a790) Data frame received for 3
I0722 11:59:59.319588       7 log.go:172] (0xc002622000) (3) Data frame handling
I0722 11:59:59.319610       7 log.go:172] (0xc002622000) (3) Data frame sent
I0722 11:59:59.319622       7 log.go:172] (0xc000e0a790) Data frame received for 3
I0722 11:59:59.319635       7 log.go:172] (0xc002622000) (3) Data frame handling
I0722 11:59:59.319699       7 log.go:172] (0xc000e0a790) Data frame received for 5
I0722 11:59:59.319713       7 log.go:172] (0xc0026220a0) (5) Data frame handling
I0722 11:59:59.321315       7 log.go:172] (0xc000e0a790) Data frame received for 1
I0722 11:59:59.321334       7 log.go:172] (0xc002661680) (1) Data frame handling
I0722 11:59:59.321343       7 log.go:172] (0xc002661680) (1) Data frame sent
I0722 11:59:59.321353       7 log.go:172] (0xc000e0a790) (0xc002661680) Stream removed, broadcasting: 1
I0722 11:59:59.321386       7 log.go:172] (0xc000e0a790) Go away received
I0722 11:59:59.321428       7 log.go:172] (0xc000e0a790) (0xc002661680) Stream removed, broadcasting: 1
I0722 11:59:59.321439       7 log.go:172] (0xc000e0a790) (0xc002622000) Stream removed, broadcasting: 3
I0722 11:59:59.321446       7 log.go:172] (0xc000e0a790) (0xc0026220a0) Stream removed, broadcasting: 5
Jul 22 11:59:59.321: INFO: Exec stderr: ""
Jul 22 11:59:59.321: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-tb2vq PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 11:59:59.321: INFO: >>> kubeConfig: /root/.kube/config
I0722 11:59:59.351712       7 log.go:172] (0xc00232e790) (0xc002622320) Create stream
I0722 11:59:59.351740       7 log.go:172] (0xc00232e790) (0xc002622320) Stream added, broadcasting: 1
I0722 11:59:59.353322       7 log.go:172] (0xc00232e790) Reply frame received for 1
I0722 11:59:59.353363       7 log.go:172] (0xc00232e790) (0xc0027ae820) Create stream
I0722 11:59:59.353373       7 log.go:172] (0xc00232e790) (0xc0027ae820) Stream added, broadcasting: 3
I0722 11:59:59.354119       7 log.go:172] (0xc00232e790) Reply frame received for 3
I0722 11:59:59.354142       7 log.go:172] (0xc00232e790) (0xc001713cc0) Create stream
I0722 11:59:59.354153       7 log.go:172] (0xc00232e790) (0xc001713cc0) Stream added, broadcasting: 5
I0722 11:59:59.354974       7 log.go:172] (0xc00232e790) Reply frame received for 5
I0722 11:59:59.411768       7 log.go:172] (0xc00232e790) Data frame received for 5
I0722 11:59:59.411818       7 log.go:172] (0xc001713cc0) (5) Data frame handling
I0722 11:59:59.411856       7 log.go:172] (0xc00232e790) Data frame received for 3
I0722 11:59:59.411876       7 log.go:172] (0xc0027ae820) (3) Data frame handling
I0722 11:59:59.411910       7 log.go:172] (0xc0027ae820) (3) Data frame sent
I0722 11:59:59.411930       7 log.go:172] (0xc00232e790) Data frame received for 3
I0722 11:59:59.411940       7 log.go:172] (0xc0027ae820) (3) Data frame handling
I0722 11:59:59.413352       7 log.go:172] (0xc00232e790) Data frame received for 1
I0722 11:59:59.413371       7 log.go:172] (0xc002622320) (1) Data frame handling
I0722 11:59:59.413381       7 log.go:172] (0xc002622320) (1) Data frame sent
I0722 11:59:59.413398       7 log.go:172] (0xc00232e790) (0xc002622320) Stream removed, broadcasting: 1
I0722 11:59:59.413417       7 log.go:172] (0xc00232e790) Go away received
I0722 11:59:59.413540       7 log.go:172] (0xc00232e790) (0xc002622320) Stream removed, broadcasting: 1
I0722 11:59:59.413561       7 log.go:172] (0xc00232e790) (0xc0027ae820) Stream removed, broadcasting: 3
I0722 11:59:59.413573       7 log.go:172] (0xc00232e790) (0xc001713cc0) Stream removed, broadcasting: 5
Jul 22 11:59:59.413: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 11:59:59.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-tb2vq" for this suite.
Jul 22 12:00:51.470: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:00:51.524: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-tb2vq, resource: bindings, ignored listing per whitelist
Jul 22 12:00:51.543: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-tb2vq deletion completed in 52.121244956s

• [SLOW TEST:63.498 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
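
The checks above boil down to three cases: containers in an ordinary (hostNetwork=false) pod get a kubelet-managed /etc/hosts, a container that mounts /etc/hosts itself does not, and a hostNetwork=true pod is left alone entirely. The sketch below shows the first pod shape, reusing the test-pod and busybox-1..3 names from the log; the hostPath volume details, image, and sleep command are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// test-pod: hostNetwork=false, so the kubelet rewrites /etc/hosts for
	// busybox-1 and busybox-2. busybox-3 mounts /etc/hosts explicitly, so the
	// kubelet must leave that container's file alone (the third check in the log).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			HostNetwork:   false,
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts", // hypothetical volume name
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "3600"}},
				{Name: "busybox-2", Image: "busybox", Command: []string{"sleep", "3600"}},
				{
					Name:    "busybox-3",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{{
						Name:      "host-etc-hosts",
						MountPath: "/etc/hosts", // explicit mount => not kubelet-managed
					}},
				},
			},
		},
	}

	// test-host-network-pod in the log differs essentially in Spec.HostNetwork = true;
	// with host networking the kubelet also leaves /etc/hosts unmanaged.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
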
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:00:51.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0722 12:01:32.122937       7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 22 12:01:32.123: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:01:32.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-wdllp" for this suite.
Jul 22 12:01:46.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:01:46.204: INFO: namespace: e2e-tests-gc-wdllp, resource: bindings, ignored listing per whitelist
Jul 22 12:01:46.217: INFO: namespace e2e-tests-gc-wdllp deletion completed in 14.091220707s

• [SLOW TEST:54.673 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
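
The rc in this spec is deleted with orphan propagation, and the test then watches for 30 seconds to confirm the garbage collector does not delete the pods the rc owned. A minimal sketch of the delete options that express that policy is below; the exact delete call it would be passed to depends on the client-go version in use.

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Orphan propagation: the owning object is deleted, but its dependents
	// (the rc's pods) are left behind with their ownerReferences removed,
	// so the garbage collector must not cascade to them.
	orphan := metav1.DeletePropagationOrphan

	opts := metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	}

	out, _ := json.MarshalIndent(opts, "", "  ")
	fmt.Println(string(out))
}
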
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:01:46.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating the pod
Jul 22 12:01:51.679: INFO: Successfully updated pod "annotationupdate1e0e30a8-cc13-11ea-aa05-0242ac11000b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:01:55.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-8gxx5" for this suite.
Jul 22 12:02:17.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:02:17.781: INFO: namespace: e2e-tests-projected-8gxx5, resource: bindings, ignored listing per whitelist
Jul 22 12:02:17.829: INFO: namespace e2e-tests-projected-8gxx5 deletion completed in 22.104090227s

• [SLOW TEST:31.612 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
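
The pod in this case projects its own metadata.annotations into a file via the downward API; the test then updates the annotations ("Successfully updated pod" above) and the kubelet is expected to refresh the projected file. A sketch of such a pod follows; the pod name, initial annotation, mount path, and polling command are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",             // hypothetical name
			Annotations: map[string]string{"builder": "bar"}, // hypothetical initial annotation
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
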
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:02:17.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jul 22 12:02:17.940: INFO: Pod name pod-release: Found 0 pods out of 1
Jul 22 12:02:22.945: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:02:23.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-76mxr" for this suite.
Jul 22 12:02:30.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:02:30.062: INFO: namespace: e2e-tests-replication-controller-76mxr, resource: bindings, ignored listing per whitelist
Jul 22 12:02:30.083: INFO: namespace e2e-tests-replication-controller-76mxr deletion completed in 6.110767959s

• [SLOW TEST:12.254 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
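The ReplicationController spec relies on the controller releasing any pod whose labels stop matching its selector. A minimal sketch of that relationship, assuming core/v1 types; the "name: pod-release" label mirrors the pod name in the log but the exact keys and image are illustrative, not the test's source.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(1)
    labels := map[string]string{"name": "pod-release"} // illustrative selector
    rc := corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: &replicas,
            Selector: labels,
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{Name: "c", Image: "docker.io/library/nginx:1.14-alpine"}},
                },
            },
        },
    }
    // Changing a pod's "name" label to any non-matching value removes it from
    // the selector, so the controller releases it and creates a replacement.
    out, _ := json.MarshalIndent(rc, "", "  ")
    fmt.Println(string(out))
}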
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:02:30.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:02:30.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-vnrg9" for this suite.
Jul 22 12:02:46.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:02:46.485: INFO: namespace: e2e-tests-pods-vnrg9, resource: bindings, ignored listing per whitelist
Jul 22 12:02:46.534: INFO: namespace e2e-tests-pods-vnrg9 deletion completed in 16.189562389s

• [SLOW TEST:16.452 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
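The QOS-class spec submits a pod and only verifies that the API server populated status.qosClass, which is derived from container requests and limits. A sketch of a pod that would be classed Guaranteed, assuming core/v1 and apimachinery resource types; the container name, image, and sizes are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Requests == limits for every container => QOSClass "Guaranteed".
    // Requests below limits => "Burstable"; no requests or limits => "BestEffort".
    rl := corev1.ResourceList{
        corev1.ResourceCPU:    resource.MustParse("100m"),
        corev1.ResourceMemory: resource.MustParse("100Mi"),
    }
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "qos-example"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:      "main",                 // illustrative
                Image:     "k8s.gcr.io/pause:3.1", // illustrative
                Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}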
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:02:46.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-sfhqg
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 22 12:02:46.655: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 22 12:03:14.871: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.152:8080/dial?request=hostName&protocol=udp&host=10.244.2.151&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-sfhqg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 12:03:14.871: INFO: >>> kubeConfig: /root/.kube/config
I0722 12:03:14.900230       7 log.go:172] (0xc0022542c0) (0xc002a863c0) Create stream
I0722 12:03:14.900263       7 log.go:172] (0xc0022542c0) (0xc002a863c0) Stream added, broadcasting: 1
I0722 12:03:14.902188       7 log.go:172] (0xc0022542c0) Reply frame received for 1
I0722 12:03:14.902229       7 log.go:172] (0xc0022542c0) (0xc0023170e0) Create stream
I0722 12:03:14.902250       7 log.go:172] (0xc0022542c0) (0xc0023170e0) Stream added, broadcasting: 3
I0722 12:03:14.903042       7 log.go:172] (0xc0022542c0) Reply frame received for 3
I0722 12:03:14.903074       7 log.go:172] (0xc0022542c0) (0xc0028d5400) Create stream
I0722 12:03:14.903085       7 log.go:172] (0xc0022542c0) (0xc0028d5400) Stream added, broadcasting: 5
I0722 12:03:14.903979       7 log.go:172] (0xc0022542c0) Reply frame received for 5
I0722 12:03:14.990710       7 log.go:172] (0xc0022542c0) Data frame received for 3
I0722 12:03:14.990760       7 log.go:172] (0xc0023170e0) (3) Data frame handling
I0722 12:03:14.990786       7 log.go:172] (0xc0023170e0) (3) Data frame sent
I0722 12:03:14.991639       7 log.go:172] (0xc0022542c0) Data frame received for 3
I0722 12:03:14.991667       7 log.go:172] (0xc0023170e0) (3) Data frame handling
I0722 12:03:14.991768       7 log.go:172] (0xc0022542c0) Data frame received for 5
I0722 12:03:14.991795       7 log.go:172] (0xc0028d5400) (5) Data frame handling
I0722 12:03:14.993484       7 log.go:172] (0xc0022542c0) Data frame received for 1
I0722 12:03:14.993507       7 log.go:172] (0xc002a863c0) (1) Data frame handling
I0722 12:03:14.993517       7 log.go:172] (0xc002a863c0) (1) Data frame sent
I0722 12:03:14.993535       7 log.go:172] (0xc0022542c0) (0xc002a863c0) Stream removed, broadcasting: 1
I0722 12:03:14.993592       7 log.go:172] (0xc0022542c0) Go away received
I0722 12:03:14.993646       7 log.go:172] (0xc0022542c0) (0xc002a863c0) Stream removed, broadcasting: 1
I0722 12:03:14.993693       7 log.go:172] (0xc0022542c0) (0xc0023170e0) Stream removed, broadcasting: 3
I0722 12:03:14.993739       7 log.go:172] (0xc0022542c0) (0xc0028d5400) Stream removed, broadcasting: 5
Jul 22 12:03:14.993: INFO: Waiting for endpoints: map[]
Jul 22 12:03:14.997: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.152:8080/dial?request=hostName&protocol=udp&host=10.244.1.100&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-sfhqg PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 12:03:14.997: INFO: >>> kubeConfig: /root/.kube/config
I0722 12:03:15.029404       7 log.go:172] (0xc00132a2c0) (0xc0023172c0) Create stream
I0722 12:03:15.029436       7 log.go:172] (0xc00132a2c0) (0xc0023172c0) Stream added, broadcasting: 1
I0722 12:03:15.031015       7 log.go:172] (0xc00132a2c0) Reply frame received for 1
I0722 12:03:15.031069       7 log.go:172] (0xc00132a2c0) (0xc000a1fd60) Create stream
I0722 12:03:15.031088       7 log.go:172] (0xc00132a2c0) (0xc000a1fd60) Stream added, broadcasting: 3
I0722 12:03:15.031877       7 log.go:172] (0xc00132a2c0) Reply frame received for 3
I0722 12:03:15.031911       7 log.go:172] (0xc00132a2c0) (0xc000a1fe00) Create stream
I0722 12:03:15.031922       7 log.go:172] (0xc00132a2c0) (0xc000a1fe00) Stream added, broadcasting: 5
I0722 12:03:15.032812       7 log.go:172] (0xc00132a2c0) Reply frame received for 5
I0722 12:03:15.098516       7 log.go:172] (0xc00132a2c0) Data frame received for 3
I0722 12:03:15.098567       7 log.go:172] (0xc000a1fd60) (3) Data frame handling
I0722 12:03:15.098588       7 log.go:172] (0xc000a1fd60) (3) Data frame sent
I0722 12:03:15.099318       7 log.go:172] (0xc00132a2c0) Data frame received for 3
I0722 12:03:15.099347       7 log.go:172] (0xc000a1fd60) (3) Data frame handling
I0722 12:03:15.099394       7 log.go:172] (0xc00132a2c0) Data frame received for 5
I0722 12:03:15.099413       7 log.go:172] (0xc000a1fe00) (5) Data frame handling
I0722 12:03:15.100989       7 log.go:172] (0xc00132a2c0) Data frame received for 1
I0722 12:03:15.101011       7 log.go:172] (0xc0023172c0) (1) Data frame handling
I0722 12:03:15.101024       7 log.go:172] (0xc0023172c0) (1) Data frame sent
I0722 12:03:15.101042       7 log.go:172] (0xc00132a2c0) (0xc0023172c0) Stream removed, broadcasting: 1
I0722 12:03:15.101054       7 log.go:172] (0xc00132a2c0) Go away received
I0722 12:03:15.101241       7 log.go:172] (0xc00132a2c0) (0xc0023172c0) Stream removed, broadcasting: 1
I0722 12:03:15.101268       7 log.go:172] (0xc00132a2c0) (0xc000a1fd60) Stream removed, broadcasting: 3
I0722 12:03:15.101286       7 log.go:172] (0xc00132a2c0) (0xc000a1fe00) Stream removed, broadcasting: 5
Jul 22 12:03:15.101: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:03:15.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-sfhqg" for this suite.
Jul 22 12:03:37.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:03:37.148: INFO: namespace: e2e-tests-pod-network-test-sfhqg, resource: bindings, ignored listing per whitelist
Jul 22 12:03:37.203: INFO: namespace e2e-tests-pod-network-test-sfhqg deletion completed in 22.097745464s

• [SLOW TEST:50.669 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:03:37.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name s-test-opt-del-5fdfbe97-cc13-11ea-aa05-0242ac11000b
STEP: Creating secret with name s-test-opt-upd-5fdfbf15-cc13-11ea-aa05-0242ac11000b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-5fdfbe97-cc13-11ea-aa05-0242ac11000b
STEP: Updating secret s-test-opt-upd-5fdfbf15-cc13-11ea-aa05-0242ac11000b
STEP: Creating secret with name s-test-opt-create-5fdfbf47-cc13-11ea-aa05-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:03:45.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-v4qd6" for this suite.
Jul 22 12:04:09.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:04:09.535: INFO: namespace: e2e-tests-projected-v4qd6, resource: bindings, ignored listing per whitelist
Jul 22 12:04:09.561: INFO: namespace e2e-tests-projected-v4qd6 deletion completed in 24.092125226s

• [SLOW TEST:32.357 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
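The projected-secret spec deletes one referenced secret, updates another and creates a third, then waits for the mounted files to catch up; marking a source Optional lets the pod run even while a referenced secret is absent. A sketch of one such source, assuming core/v1 types; the volume and secret names are illustrative, not the test's own.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    optional := true
    vol := corev1.Volume{
        Name: "projected-secrets", // illustrative
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        // Optional means a missing secret is tolerated instead of
                        // blocking the volume mount; the file appears once the
                        // secret exists and disappears after it is deleted.
                        LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"}, // illustrative
                        Optional:             &optional,
                    },
                }},
            },
        },
    }
    out, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(out))
}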
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:04:09.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test substitution in container's command
Jul 22 12:04:09.688: INFO: Waiting up to 5m0s for pod "var-expansion-7329e3ea-cc13-11ea-aa05-0242ac11000b" in namespace "e2e-tests-var-expansion-pk86f" to be "success or failure"
Jul 22 12:04:09.692: INFO: Pod "var-expansion-7329e3ea-cc13-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.877373ms
Jul 22 12:04:11.696: INFO: Pod "var-expansion-7329e3ea-cc13-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008241959s
Jul 22 12:04:13.700: INFO: Pod "var-expansion-7329e3ea-cc13-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.012518766s
Jul 22 12:04:15.704: INFO: Pod "var-expansion-7329e3ea-cc13-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016860401s
STEP: Saw pod success
Jul 22 12:04:15.705: INFO: Pod "var-expansion-7329e3ea-cc13-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:04:15.708: INFO: Trying to get logs from node hunter-worker2 pod var-expansion-7329e3ea-cc13-11ea-aa05-0242ac11000b container dapi-container: 
STEP: delete the pod
Jul 22 12:04:15.734: INFO: Waiting for pod var-expansion-7329e3ea-cc13-11ea-aa05-0242ac11000b to disappear
Jul 22 12:04:15.739: INFO: Pod var-expansion-7329e3ea-cc13-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:04:15.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-var-expansion-pk86f" for this suite.
Jul 22 12:04:21.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:04:21.797: INFO: namespace: e2e-tests-var-expansion-pk86f, resource: bindings, ignored listing per whitelist
Jul 22 12:04:21.842: INFO: namespace e2e-tests-var-expansion-pk86f deletion completed in 6.099232072s

• [SLOW TEST:12.280 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
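The variable-expansion spec runs a pod whose command references its own environment with the $(VAR) syntax and then checks the container output. A minimal sketch of that wiring, assuming core/v1 types; the container name dapi-container comes from the log, while the image, variable, and value are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "dapi-container",
                Image: "busybox", // illustrative
                // "$(MESSAGE)" is expanded by the kubelet before the container starts.
                Command: []string{"sh", "-c", "echo $(MESSAGE)"},
                Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from variable expansion"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}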
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:04:21.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-j74f8
Jul 22 12:04:25.965: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-j74f8
STEP: checking the pod's current state and verifying that restartCount is present
Jul 22 12:04:25.967: INFO: Initial restart count of pod liveness-exec is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:08:26.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-probe-j74f8" for this suite.
Jul 22 12:08:32.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:08:32.908: INFO: namespace: e2e-tests-container-probe-j74f8, resource: bindings, ignored listing per whitelist
Jul 22 12:08:32.978: INFO: namespace e2e-tests-container-probe-j74f8 deletion completed in 6.104808655s

• [SLOW TEST:251.136 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
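The probe spec creates a liveness-exec pod whose probe runs "cat /tmp/health" and then watches for four minutes to confirm restartCount stays at 0. A hedged sketch of such a probe, assuming the core/v1 types of this era (the Handler field was renamed ProbeHandler in much newer releases); the image, command, and timings are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "liveness",
                Image: "busybox", // illustrative
                // Keep /tmp/health present so the probe keeps succeeding and the
                // container is never restarted.
                Command: []string{"sh", "-c", "touch /tmp/health && sleep 600"},
                LivenessProbe: &corev1.Probe{
                    Handler: corev1.Handler{
                        Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                    },
                    InitialDelaySeconds: 5,
                    PeriodSeconds:       5,
                },
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}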
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:08:32.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 22 12:08:33.120: INFO: Waiting up to 5m0s for pod "pod-102ef805-cc14-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-kgrc7" to be "success or failure"
Jul 22 12:08:33.130: INFO: Pod "pod-102ef805-cc14-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.095081ms
Jul 22 12:08:35.133: INFO: Pod "pod-102ef805-cc14-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013811669s
Jul 22 12:08:37.138: INFO: Pod "pod-102ef805-cc14-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018101262s
STEP: Saw pod success
Jul 22 12:08:37.138: INFO: Pod "pod-102ef805-cc14-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:08:37.141: INFO: Trying to get logs from node hunter-worker2 pod pod-102ef805-cc14-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 12:08:37.161: INFO: Waiting for pod pod-102ef805-cc14-11ea-aa05-0242ac11000b to disappear
Jul 22 12:08:37.166: INFO: Pod pod-102ef805-cc14-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:08:37.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-kgrc7" for this suite.
Jul 22 12:08:43.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:08:43.222: INFO: namespace: e2e-tests-emptydir-kgrc7, resource: bindings, ignored listing per whitelist
Jul 22 12:08:43.286: INFO: namespace e2e-tests-emptydir-kgrc7 deletion completed in 6.116394722s

• [SLOW TEST:10.308 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
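The emptyDir spec mounts a volume backed by the node's default storage medium and checks the resulting mount mode from inside the container. A sketch of the volume and mount, assuming core/v1 types; the image, command, and mount path are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-default-medium"},
        Spec: corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                // StorageMediumDefault backs the volume with node-local disk;
                // StorageMediumMemory would use tmpfs instead.
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                },
            }},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox", // illustrative
                Command:      []string{"sh", "-c", "ls -ld /test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}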
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:08:43.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 12:08:43.397: INFO: Waiting up to 5m0s for pod "downwardapi-volume-164df247-cc14-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-vxzw7" to be "success or failure"
Jul 22 12:08:43.408: INFO: Pod "downwardapi-volume-164df247-cc14-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.410725ms
Jul 22 12:08:45.411: INFO: Pod "downwardapi-volume-164df247-cc14-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013628004s
Jul 22 12:08:47.539: INFO: Pod "downwardapi-volume-164df247-cc14-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141707218s
STEP: Saw pod success
Jul 22 12:08:47.539: INFO: Pod "downwardapi-volume-164df247-cc14-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:08:47.542: INFO: Trying to get logs from node hunter-worker pod downwardapi-volume-164df247-cc14-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 12:08:47.588: INFO: Waiting for pod downwardapi-volume-164df247-cc14-11ea-aa05-0242ac11000b to disappear
Jul 22 12:08:47.599: INFO: Pod downwardapi-volume-164df247-cc14-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:08:47.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-vxzw7" for this suite.
Jul 22 12:08:53.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:08:53.673: INFO: namespace: e2e-tests-projected-vxzw7, resource: bindings, ignored listing per whitelist
Jul 22 12:08:53.693: INFO: namespace e2e-tests-projected-vxzw7 deletion completed in 6.090714574s

• [SLOW TEST:10.406 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:08:53.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
Jul 22 12:08:53.776: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 22 12:08:53.830: INFO: Waiting for terminating namespaces to be deleted...
Jul 22 12:08:53.832: INFO: 
Logging pods the kubelet thinks are on node hunter-worker before test
Jul 22 12:08:53.837: INFO: kindnet-2w5m4 from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Jul 22 12:08:53.837: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 22 12:08:53.837: INFO: kube-proxy-8wnps from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Jul 22 12:08:53.837: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 22 12:08:53.837: INFO: 
Logging pods the kubelet thinks are on node hunter-worker2 before test
Jul 22 12:08:53.843: INFO: kindnet-hpnvh from kube-system started at 2020-07-10 10:22:49 +0000 UTC (1 container statuses recorded)
Jul 22 12:08:53.843: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul 22 12:08:53.843: INFO: kube-proxy-b6f6s from kube-system started at 2020-07-10 10:22:48 +0000 UTC (1 container statuses recorded)
Jul 22 12:08:53.843: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162411cb8849dde4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:08:54.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-sched-pred-t7zjq" for this suite.
Jul 22 12:09:00.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:09:00.902: INFO: namespace: e2e-tests-sched-pred-t7zjq, resource: bindings, ignored listing per whitelist
Jul 22 12:09:00.968: INFO: namespace e2e-tests-sched-pred-t7zjq deletion completed in 6.099893173s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70

• [SLOW TEST:7.275 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
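The scheduler-predicates spec creates a pod whose nodeSelector matches no node and then looks for the FailedScheduling event quoted in the log. A sketch of such a pod spec, assuming core/v1 types; the pod name restricted-pod mirrors the event above, while the label pair and image are deliberately invented values no node carries.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
        Spec: corev1.PodSpec{
            // No node in the cluster carries this label, so the scheduler reports
            // "0/3 nodes are available: 3 node(s) didn't match node selector."
            NodeSelector: map[string]string{"label": "nonempty"},                              // illustrative
            Containers:   []corev1.Container{{Name: "c", Image: "k8s.gcr.io/pause:3.1"}},      // illustrative image
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}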
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:09:00.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-pf4pw
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jul 22 12:09:01.130: INFO: Found 0 stateful pods, waiting for 3
Jul 22 12:09:11.136: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 12:09:11.136: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 12:09:11.136: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 22 12:09:21.136: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 12:09:21.136: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 12:09:21.136: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul 22 12:09:21.164: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul 22 12:09:31.217: INFO: Updating stateful set ss2
Jul 22 12:09:31.237: INFO: Waiting for Pod e2e-tests-statefulset-pf4pw/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jul 22 12:09:41.367: INFO: Found 2 stateful pods, waiting for 3
Jul 22 12:09:51.372: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 12:09:51.373: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 12:09:51.373: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul 22 12:09:51.397: INFO: Updating stateful set ss2
Jul 22 12:09:51.410: INFO: Waiting for Pod e2e-tests-statefulset-pf4pw/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul 22 12:10:01.435: INFO: Updating stateful set ss2
Jul 22 12:10:01.463: INFO: Waiting for StatefulSet e2e-tests-statefulset-pf4pw/ss2 to complete update
Jul 22 12:10:01.463: INFO: Waiting for Pod e2e-tests-statefulset-pf4pw/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 22 12:10:11.471: INFO: Deleting all statefulset in ns e2e-tests-statefulset-pf4pw
Jul 22 12:10:11.474: INFO: Scaling statefulset ss2 to 0
Jul 22 12:10:31.504: INFO: Waiting for statefulset status.replicas updated to 0
Jul 22 12:10:31.507: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:10:31.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-pf4pw" for this suite.
Jul 22 12:10:37.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:10:37.605: INFO: namespace: e2e-tests-statefulset-pf4pw, resource: bindings, ignored listing per whitelist
Jul 22 12:10:37.676: INFO: namespace e2e-tests-statefulset-pf4pw deletion completed in 6.14604843s

• [SLOW TEST:96.708 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
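The canary and phased rollout in the StatefulSet spec are driven by the RollingUpdate partition: ordinals at or above the partition receive the new revision (ss2-2 first, in the log above), lower ordinals keep the old one, and lowering the partition step by step phases the update through the set. A hedged sketch of the relevant fields, assuming apps/v1 and core/v1 types; it is not the test's source, and the selector labels are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    replicas := int32(3)
    partition := int32(2) // only the highest ordinal (the canary) is updated first
    labels := map[string]string{"app": "ss2"} // illustrative selector
    ss := appsv1.StatefulSet{
        ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
        Spec: appsv1.StatefulSetSpec{
            Replicas:    &replicas,
            ServiceName: "test",
            Selector:    &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{Name: "nginx", Image: "docker.io/library/nginx:1.15-alpine"}},
                },
            },
            UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
                Type:          appsv1.RollingUpdateStatefulSetStrategyType,
                RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{Partition: &partition},
            },
        },
    }
    out, _ := json.MarshalIndent(ss, "", "  ")
    fmt.Println(string(out))
}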
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:10:37.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul 22 12:10:37.770: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:10:43.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-vfprj" for this suite.
Jul 22 12:10:49.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:10:49.423: INFO: namespace: e2e-tests-init-container-vfprj, resource: bindings, ignored listing per whitelist
Jul 22 12:10:49.468: INFO: namespace e2e-tests-init-container-vfprj deletion completed in 6.090765525s

• [SLOW TEST:11.791 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
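The init-container spec builds a pod whose first init container exits non-zero; with restartPolicy Never the kubelet fails the pod and never starts the app container. A sketch of that shape, assuming core/v1 types; the names, images, and commands are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            InitContainers: []corev1.Container{
                // exit 1 here marks the whole pod Failed because nothing is retried.
                {Name: "init1", Image: "busybox", Command: []string{"sh", "-c", "exit 1"}},
                {Name: "init2", Image: "busybox", Command: []string{"sh", "-c", "true"}},
            },
            Containers: []corev1.Container{
                // Never reached: app containers only start after every init container succeeds.
                {Name: "run1", Image: "busybox", Command: []string{"sh", "-c", "sleep 600"}},
            },
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}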
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:10:49.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-618301e7-cc14-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 12:10:49.595: INFO: Waiting up to 5m0s for pod "pod-configmaps-6184bd8a-cc14-11ea-aa05-0242ac11000b" in namespace "e2e-tests-configmap-k2kww" to be "success or failure"
Jul 22 12:10:49.619: INFO: Pod "pod-configmaps-6184bd8a-cc14-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.604258ms
Jul 22 12:10:51.624: INFO: Pod "pod-configmaps-6184bd8a-cc14-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02813344s
Jul 22 12:10:53.628: INFO: Pod "pod-configmaps-6184bd8a-cc14-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032477717s
STEP: Saw pod success
Jul 22 12:10:53.628: INFO: Pod "pod-configmaps-6184bd8a-cc14-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:10:53.631: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-6184bd8a-cc14-11ea-aa05-0242ac11000b container configmap-volume-test: 
STEP: delete the pod
Jul 22 12:10:53.657: INFO: Waiting for pod pod-configmaps-6184bd8a-cc14-11ea-aa05-0242ac11000b to disappear
Jul 22 12:10:53.889: INFO: Pod pod-configmaps-6184bd8a-cc14-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:10:53.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-k2kww" for this suite.
Jul 22 12:10:59.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:10:59.961: INFO: namespace: e2e-tests-configmap-k2kww, resource: bindings, ignored listing per whitelist
Jul 22 12:10:59.993: INFO: namespace e2e-tests-configmap-k2kww deletion completed in 6.101293859s

• [SLOW TEST:10.525 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
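The ConfigMap volume spec mounts a configMap with defaultMode set and verifies the file permissions inside the container. A sketch of that volume, assuming core/v1 types; the mode 0400 and the names are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400) // applied to every projected key unless an item overrides it
    vol := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"}, // illustrative
                DefaultMode:          &mode,
            },
        },
    }
    out, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(out))
}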
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:10:59.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 12:11:00.113: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67cae988-cc14-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-mwf5j" to be "success or failure"
Jul 22 12:11:00.130: INFO: Pod "downwardapi-volume-67cae988-cc14-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.02355ms
Jul 22 12:11:02.133: INFO: Pod "downwardapi-volume-67cae988-cc14-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019481535s
Jul 22 12:11:04.137: INFO: Pod "downwardapi-volume-67cae988-cc14-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023281482s
STEP: Saw pod success
Jul 22 12:11:04.137: INFO: Pod "downwardapi-volume-67cae988-cc14-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:11:04.140: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-67cae988-cc14-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 12:11:04.160: INFO: Waiting for pod downwardapi-volume-67cae988-cc14-11ea-aa05-0242ac11000b to disappear
Jul 22 12:11:04.203: INFO: Pod downwardapi-volume-67cae988-cc14-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:11:04.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-mwf5j" for this suite.
Jul 22 12:11:10.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:11:10.307: INFO: namespace: e2e-tests-downward-api-mwf5j, resource: bindings, ignored listing per whitelist
Jul 22 12:11:10.311: INFO: namespace e2e-tests-downward-api-mwf5j deletion completed in 6.102809594s

• [SLOW TEST:10.317 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
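The downward-API volume spec reads the container's own memory request back out of a mounted file. A sketch of the volume item that makes that possible, assuming core/v1 types; the client-container name comes from the log, while the volume name and file path are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "podinfo", // illustrative
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "memory_request",
                    // resourceFieldRef exposes requests/limits; fieldRef would
                    // expose pod metadata such as labels or annotations instead.
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "requests.memory",
                    },
                }},
            },
        },
    }
    out, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(out))
}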
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:11:10.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name cm-test-opt-del-6df8d61e-cc14-11ea-aa05-0242ac11000b
STEP: Creating configMap with name cm-test-opt-upd-6df8d689-cc14-11ea-aa05-0242ac11000b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6df8d61e-cc14-11ea-aa05-0242ac11000b
STEP: Updating configmap cm-test-opt-upd-6df8d689-cc14-11ea-aa05-0242ac11000b
STEP: Creating configMap with name cm-test-opt-create-6df8d6c8-cc14-11ea-aa05-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:12:33.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-7kkkd" for this suite.
Jul 22 12:12:45.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:12:45.066: INFO: namespace: e2e-tests-configmap-7kkkd, resource: bindings, ignored listing per whitelist
Jul 22 12:12:45.127: INFO: namespace e2e-tests-configmap-7kkkd deletion completed in 12.11520597s

• [SLOW TEST:94.816 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:12:45.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-configmap-drnx
STEP: Creating a pod to test atomic-volume-subpath
Jul 22 12:12:45.259: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-drnx" in namespace "e2e-tests-subpath-z7j9t" to be "success or failure"
Jul 22 12:12:45.277: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Pending", Reason="", readiness=false. Elapsed: 18.194214ms
Jul 22 12:12:47.281: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022516799s
Jul 22 12:12:49.286: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027054717s
Jul 22 12:12:51.290: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031438902s
Jul 22 12:12:53.295: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Running", Reason="", readiness=false. Elapsed: 8.035903604s
Jul 22 12:12:55.299: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Running", Reason="", readiness=false. Elapsed: 10.03992635s
Jul 22 12:12:57.303: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Running", Reason="", readiness=false. Elapsed: 12.043630108s
Jul 22 12:12:59.307: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Running", Reason="", readiness=false. Elapsed: 14.0479573s
Jul 22 12:13:01.311: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Running", Reason="", readiness=false. Elapsed: 16.052424933s
Jul 22 12:13:03.363: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Running", Reason="", readiness=false. Elapsed: 18.10398298s
Jul 22 12:13:05.367: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Running", Reason="", readiness=false. Elapsed: 20.108140839s
Jul 22 12:13:07.372: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Running", Reason="", readiness=false. Elapsed: 22.112697677s
Jul 22 12:13:09.376: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Running", Reason="", readiness=false. Elapsed: 24.116857844s
Jul 22 12:13:11.380: INFO: Pod "pod-subpath-test-configmap-drnx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.121102538s
STEP: Saw pod success
Jul 22 12:13:11.380: INFO: Pod "pod-subpath-test-configmap-drnx" satisfied condition "success or failure"
Jul 22 12:13:11.383: INFO: Trying to get logs from node hunter-worker2 pod pod-subpath-test-configmap-drnx container test-container-subpath-configmap-drnx: 
STEP: delete the pod
Jul 22 12:13:11.423: INFO: Waiting for pod pod-subpath-test-configmap-drnx to disappear
Jul 22 12:13:11.437: INFO: Pod pod-subpath-test-configmap-drnx no longer exists
STEP: Deleting pod pod-subpath-test-configmap-drnx
Jul 22 12:13:11.437: INFO: Deleting pod "pod-subpath-test-configmap-drnx" in namespace "e2e-tests-subpath-z7j9t"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:13:11.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-z7j9t" for this suite.
Jul 22 12:13:17.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:13:17.571: INFO: namespace: e2e-tests-subpath-z7j9t, resource: bindings, ignored listing per whitelist
Jul 22 12:13:17.582: INFO: namespace e2e-tests-subpath-z7j9t deletion completed in 6.138781462s

• [SLOW TEST:32.454 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
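The subpath spec mounts a single entry of a configMap-backed volume at a file path via volumeMount.subPath and waits for the test container to finish reading it. A sketch of the mount, assuming core/v1 types; the configMap name, key, image, and paths are illustrative, not taken from the test.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // illustrative
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container-subpath-configmap",
                Image:   "busybox", // illustrative
                Command: []string{"sh", "-c", "cat /test/sub-file"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-volume",
                    MountPath: "/test/sub-file",
                    // SubPath mounts just one file (named after a configMap key,
                    // assuming the configMap has a "sub-file" key) rather than the
                    // whole projected directory.
                    SubPath: "sub-file", // illustrative key name
                }},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}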
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:13:17.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 12:13:17.690: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:13:18.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-custom-resource-definition-5cws4" for this suite.
Jul 22 12:13:24.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:13:24.811: INFO: namespace: e2e-tests-custom-resource-definition-5cws4, resource: bindings, ignored listing per whitelist
Jul 22 12:13:24.871: INFO: namespace e2e-tests-custom-resource-definition-5cws4 deletion completed in 6.115860411s

• [SLOW TEST:7.289 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:13:24.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:13:29.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-l8xqt" for this suite.
Jul 22 12:14:19.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:14:19.165: INFO: namespace: e2e-tests-kubelet-test-l8xqt, resource: bindings, ignored listing per whitelist
Jul 22 12:14:19.222: INFO: namespace e2e-tests-kubelet-test-l8xqt deletion completed in 50.162451556s

• [SLOW TEST:54.351 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186
    should not write to root filesystem [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
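The kubelet spec schedules a busybox container with a read-only root filesystem and expects writes to the root filesystem to fail. A sketch of the security context that enforces that, assuming core/v1 types; the image and command are illustrative.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    readOnly := true
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "busybox",
                Image: "busybox", // illustrative
                // The write is expected to fail because / is mounted read-only.
                Command:         []string{"sh", "-c", "echo test > /file || echo write refused"},
                SecurityContext: &corev1.SecurityContext{ReadOnlyRootFilesystem: &readOnly},
            }},
        },
    }
    out, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(out))
}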
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:14:19.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jul 22 12:14:19.438: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-svrzg,SelfLink:/api/v1/namespaces/e2e-tests-watch-svrzg/configmaps/e2e-watch-test-resource-version,UID:de906c1e-cc14-11ea-b2c9-0242ac120008,ResourceVersion:2187403,Generation:0,CreationTimestamp:2020-07-22 12:14:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 22 12:14:19.439: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-svrzg,SelfLink:/api/v1/namespaces/e2e-tests-watch-svrzg/configmaps/e2e-watch-test-resource-version,UID:de906c1e-cc14-11ea-b2c9-0242ac120008,ResourceVersion:2187404,Generation:0,CreationTimestamp:2020-07-22 12:14:19 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:14:19.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-svrzg" for this suite.
Jul 22 12:14:25.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:14:25.530: INFO: namespace: e2e-tests-watch-svrzg, resource: bindings, ignored listing per whitelist
Jul 22 12:14:25.613: INFO: namespace e2e-tests-watch-svrzg deletion completed in 6.161990034s

• [SLOW TEST:6.391 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
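The two events logged above (the MODIFIED carrying mutation: 2, then the DELETED) are exactly what a watch opened at the first update's resourceVersion replays. A rough way to reproduce that by hand is via kubectl proxy and curl; the namespace and configmap name below come from the log, while the resourceVersion value is a placeholder, since the version returned by the first update is not printed above:

# Proxy the API server locally, then open a watch pinned to a resourceVersion.
kubectl --kubeconfig=/root/.kube/config proxy --port=8001 &
sleep 1
RV=2187402   # placeholder: substitute the resourceVersion returned by your own first update
curl -N "http://127.0.0.1:8001/api/v1/namespaces/e2e-tests-watch-svrzg/configmaps?watch=true&fieldSelector=metadata.name=e2e-watch-test-resource-version&resourceVersion=${RV}"
# The stream replays every change newer than RV: first the MODIFIED event with
# mutation: 2, then the DELETED event, matching the two lines logged above.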
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:14:25.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-2269x
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating stateful set ss in namespace e2e-tests-statefulset-2269x
STEP: Waiting until all stateful set ss replicas are running in namespace e2e-tests-statefulset-2269x
Jul 22 12:14:25.749: INFO: Found 0 stateful pods, waiting for 1
Jul 22 12:14:35.754: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jul 22 12:14:35.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2269x ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 22 12:14:36.072: INFO: stderr: "I0722 12:14:35.919554    2107 log.go:172] (0xc00013a630) (0xc0006fe640) Create stream\nI0722 12:14:35.919623    2107 log.go:172] (0xc00013a630) (0xc0006fe640) Stream added, broadcasting: 1\nI0722 12:14:35.922425    2107 log.go:172] (0xc00013a630) Reply frame received for 1\nI0722 12:14:35.922469    2107 log.go:172] (0xc00013a630) (0xc000496c80) Create stream\nI0722 12:14:35.922482    2107 log.go:172] (0xc00013a630) (0xc000496c80) Stream added, broadcasting: 3\nI0722 12:14:35.923537    2107 log.go:172] (0xc00013a630) Reply frame received for 3\nI0722 12:14:35.923559    2107 log.go:172] (0xc00013a630) (0xc0006fe6e0) Create stream\nI0722 12:14:35.923567    2107 log.go:172] (0xc00013a630) (0xc0006fe6e0) Stream added, broadcasting: 5\nI0722 12:14:35.924485    2107 log.go:172] (0xc00013a630) Reply frame received for 5\nI0722 12:14:36.066669    2107 log.go:172] (0xc00013a630) Data frame received for 5\nI0722 12:14:36.066716    2107 log.go:172] (0xc0006fe6e0) (5) Data frame handling\nI0722 12:14:36.066746    2107 log.go:172] (0xc00013a630) Data frame received for 3\nI0722 12:14:36.066757    2107 log.go:172] (0xc000496c80) (3) Data frame handling\nI0722 12:14:36.066769    2107 log.go:172] (0xc000496c80) (3) Data frame sent\nI0722 12:14:36.066781    2107 log.go:172] (0xc00013a630) Data frame received for 3\nI0722 12:14:36.066792    2107 log.go:172] (0xc000496c80) (3) Data frame handling\nI0722 12:14:36.068506    2107 log.go:172] (0xc00013a630) Data frame received for 1\nI0722 12:14:36.068528    2107 log.go:172] (0xc0006fe640) (1) Data frame handling\nI0722 12:14:36.068544    2107 log.go:172] (0xc0006fe640) (1) Data frame sent\nI0722 12:14:36.068556    2107 log.go:172] (0xc00013a630) (0xc0006fe640) Stream removed, broadcasting: 1\nI0722 12:14:36.068591    2107 log.go:172] (0xc00013a630) Go away received\nI0722 12:14:36.068713    2107 log.go:172] (0xc00013a630) (0xc0006fe640) Stream removed, broadcasting: 1\nI0722 12:14:36.068795    2107 log.go:172] (0xc00013a630) (0xc000496c80) Stream removed, broadcasting: 3\nI0722 12:14:36.068811    2107 log.go:172] (0xc00013a630) (0xc0006fe6e0) Stream removed, broadcasting: 5\n"
Jul 22 12:14:36.072: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 22 12:14:36.072: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 22 12:14:36.076: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 22 12:14:46.089: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 22 12:14:46.089: INFO: Waiting for statefulset status.replicas updated to 0
Jul 22 12:14:46.105: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 22 12:14:46.105: INFO: ss-0  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  }]
Jul 22 12:14:46.105: INFO: 
Jul 22 12:14:46.105: INFO: StatefulSet ss has not reached scale 3, at 1
Jul 22 12:14:47.167: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993184726s
Jul 22 12:14:48.184: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.931526094s
Jul 22 12:14:49.189: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.913604969s
Jul 22 12:14:50.200: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.909057148s
Jul 22 12:14:51.206: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.898120684s
Jul 22 12:14:52.211: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.892556905s
Jul 22 12:14:53.245: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.887758926s
Jul 22 12:14:54.249: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.8538321s
Jul 22 12:14:55.255: INFO: Verifying statefulset ss doesn't scale past 3 for another 849.185427ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-2269x
Jul 22 12:14:56.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2269x ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 22 12:14:56.497: INFO: stderr: "I0722 12:14:56.413194    2130 log.go:172] (0xc0006062c0) (0xc0007f0780) Create stream\nI0722 12:14:56.413260    2130 log.go:172] (0xc0006062c0) (0xc0007f0780) Stream added, broadcasting: 1\nI0722 12:14:56.417468    2130 log.go:172] (0xc0006062c0) Reply frame received for 1\nI0722 12:14:56.417538    2130 log.go:172] (0xc0006062c0) (0xc0003a9cc0) Create stream\nI0722 12:14:56.417565    2130 log.go:172] (0xc0006062c0) (0xc0003a9cc0) Stream added, broadcasting: 3\nI0722 12:14:56.418608    2130 log.go:172] (0xc0006062c0) Reply frame received for 3\nI0722 12:14:56.418657    2130 log.go:172] (0xc0006062c0) (0xc0007f0000) Create stream\nI0722 12:14:56.418677    2130 log.go:172] (0xc0006062c0) (0xc0007f0000) Stream added, broadcasting: 5\nI0722 12:14:56.419769    2130 log.go:172] (0xc0006062c0) Reply frame received for 5\nI0722 12:14:56.492511    2130 log.go:172] (0xc0006062c0) Data frame received for 3\nI0722 12:14:56.492544    2130 log.go:172] (0xc0003a9cc0) (3) Data frame handling\nI0722 12:14:56.492563    2130 log.go:172] (0xc0003a9cc0) (3) Data frame sent\nI0722 12:14:56.492573    2130 log.go:172] (0xc0006062c0) Data frame received for 3\nI0722 12:14:56.492580    2130 log.go:172] (0xc0003a9cc0) (3) Data frame handling\nI0722 12:14:56.492704    2130 log.go:172] (0xc0006062c0) Data frame received for 5\nI0722 12:14:56.492833    2130 log.go:172] (0xc0007f0000) (5) Data frame handling\nI0722 12:14:56.494023    2130 log.go:172] (0xc0006062c0) Data frame received for 1\nI0722 12:14:56.494039    2130 log.go:172] (0xc0007f0780) (1) Data frame handling\nI0722 12:14:56.494053    2130 log.go:172] (0xc0007f0780) (1) Data frame sent\nI0722 12:14:56.494069    2130 log.go:172] (0xc0006062c0) (0xc0007f0780) Stream removed, broadcasting: 1\nI0722 12:14:56.494080    2130 log.go:172] (0xc0006062c0) Go away received\nI0722 12:14:56.494278    2130 log.go:172] (0xc0006062c0) (0xc0007f0780) Stream removed, broadcasting: 1\nI0722 12:14:56.494297    2130 log.go:172] (0xc0006062c0) (0xc0003a9cc0) Stream removed, broadcasting: 3\nI0722 12:14:56.494305    2130 log.go:172] (0xc0006062c0) (0xc0007f0000) Stream removed, broadcasting: 5\n"
Jul 22 12:14:56.497: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 22 12:14:56.497: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 22 12:14:56.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2269x ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 22 12:14:56.692: INFO: stderr: "I0722 12:14:56.620707    2153 log.go:172] (0xc0007f42c0) (0xc00070c5a0) Create stream\nI0722 12:14:56.620875    2153 log.go:172] (0xc0007f42c0) (0xc00070c5a0) Stream added, broadcasting: 1\nI0722 12:14:56.623643    2153 log.go:172] (0xc0007f42c0) Reply frame received for 1\nI0722 12:14:56.623695    2153 log.go:172] (0xc0007f42c0) (0xc00070c640) Create stream\nI0722 12:14:56.623711    2153 log.go:172] (0xc0007f42c0) (0xc00070c640) Stream added, broadcasting: 3\nI0722 12:14:56.625252    2153 log.go:172] (0xc0007f42c0) Reply frame received for 3\nI0722 12:14:56.625308    2153 log.go:172] (0xc0007f42c0) (0xc0005e0c80) Create stream\nI0722 12:14:56.625330    2153 log.go:172] (0xc0007f42c0) (0xc0005e0c80) Stream added, broadcasting: 5\nI0722 12:14:56.626432    2153 log.go:172] (0xc0007f42c0) Reply frame received for 5\nI0722 12:14:56.686823    2153 log.go:172] (0xc0007f42c0) Data frame received for 3\nI0722 12:14:56.686853    2153 log.go:172] (0xc00070c640) (3) Data frame handling\nI0722 12:14:56.686866    2153 log.go:172] (0xc00070c640) (3) Data frame sent\nI0722 12:14:56.686876    2153 log.go:172] (0xc0007f42c0) Data frame received for 3\nI0722 12:14:56.686883    2153 log.go:172] (0xc00070c640) (3) Data frame handling\nI0722 12:14:56.686963    2153 log.go:172] (0xc0007f42c0) Data frame received for 5\nI0722 12:14:56.686977    2153 log.go:172] (0xc0005e0c80) (5) Data frame handling\nI0722 12:14:56.686992    2153 log.go:172] (0xc0005e0c80) (5) Data frame sent\nI0722 12:14:56.687001    2153 log.go:172] (0xc0007f42c0) Data frame received for 5\nI0722 12:14:56.687009    2153 log.go:172] (0xc0005e0c80) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0722 12:14:56.688932    2153 log.go:172] (0xc0007f42c0) Data frame received for 1\nI0722 12:14:56.688954    2153 log.go:172] (0xc00070c5a0) (1) Data frame handling\nI0722 12:14:56.688968    2153 log.go:172] (0xc00070c5a0) (1) Data frame sent\nI0722 12:14:56.688987    2153 log.go:172] (0xc0007f42c0) (0xc00070c5a0) Stream removed, broadcasting: 1\nI0722 12:14:56.689007    2153 log.go:172] (0xc0007f42c0) Go away received\nI0722 12:14:56.689255    2153 log.go:172] (0xc0007f42c0) (0xc00070c5a0) Stream removed, broadcasting: 1\nI0722 12:14:56.689275    2153 log.go:172] (0xc0007f42c0) (0xc00070c640) Stream removed, broadcasting: 3\nI0722 12:14:56.689286    2153 log.go:172] (0xc0007f42c0) (0xc0005e0c80) Stream removed, broadcasting: 5\n"
Jul 22 12:14:56.692: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 22 12:14:56.692: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 22 12:14:56.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2269x ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 22 12:14:56.903: INFO: stderr: "I0722 12:14:56.819699    2175 log.go:172] (0xc000138580) (0xc0007185a0) Create stream\nI0722 12:14:56.819756    2175 log.go:172] (0xc000138580) (0xc0007185a0) Stream added, broadcasting: 1\nI0722 12:14:56.822221    2175 log.go:172] (0xc000138580) Reply frame received for 1\nI0722 12:14:56.822291    2175 log.go:172] (0xc000138580) (0xc000120d20) Create stream\nI0722 12:14:56.822308    2175 log.go:172] (0xc000138580) (0xc000120d20) Stream added, broadcasting: 3\nI0722 12:14:56.823345    2175 log.go:172] (0xc000138580) Reply frame received for 3\nI0722 12:14:56.823383    2175 log.go:172] (0xc000138580) (0xc000718640) Create stream\nI0722 12:14:56.823395    2175 log.go:172] (0xc000138580) (0xc000718640) Stream added, broadcasting: 5\nI0722 12:14:56.824389    2175 log.go:172] (0xc000138580) Reply frame received for 5\nI0722 12:14:56.898078    2175 log.go:172] (0xc000138580) Data frame received for 5\nI0722 12:14:56.898115    2175 log.go:172] (0xc000718640) (5) Data frame handling\nI0722 12:14:56.898129    2175 log.go:172] (0xc000718640) (5) Data frame sent\nI0722 12:14:56.898138    2175 log.go:172] (0xc000138580) Data frame received for 5\nI0722 12:14:56.898147    2175 log.go:172] (0xc000718640) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\nI0722 12:14:56.898175    2175 log.go:172] (0xc000138580) Data frame received for 3\nI0722 12:14:56.898196    2175 log.go:172] (0xc000120d20) (3) Data frame handling\nI0722 12:14:56.898217    2175 log.go:172] (0xc000120d20) (3) Data frame sent\nI0722 12:14:56.898227    2175 log.go:172] (0xc000138580) Data frame received for 3\nI0722 12:14:56.898238    2175 log.go:172] (0xc000120d20) (3) Data frame handling\nI0722 12:14:56.899495    2175 log.go:172] (0xc000138580) Data frame received for 1\nI0722 12:14:56.899525    2175 log.go:172] (0xc0007185a0) (1) Data frame handling\nI0722 12:14:56.899545    2175 log.go:172] (0xc0007185a0) (1) Data frame sent\nI0722 12:14:56.899563    2175 log.go:172] (0xc000138580) (0xc0007185a0) Stream removed, broadcasting: 1\nI0722 12:14:56.899586    2175 log.go:172] (0xc000138580) Go away received\nI0722 12:14:56.899784    2175 log.go:172] (0xc000138580) (0xc0007185a0) Stream removed, broadcasting: 1\nI0722 12:14:56.899822    2175 log.go:172] (0xc000138580) (0xc000120d20) Stream removed, broadcasting: 3\nI0722 12:14:56.899838    2175 log.go:172] (0xc000138580) (0xc000718640) Stream removed, broadcasting: 5\n"
Jul 22 12:14:56.903: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 22 12:14:56.903: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 22 12:14:56.907: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jul 22 12:15:06.913: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 12:15:06.913: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 12:15:06.913: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale down will not halt with unhealthy stateful pod
Jul 22 12:15:06.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2269x ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 22 12:15:07.134: INFO: stderr: "I0722 12:15:07.060630    2198 log.go:172] (0xc0007fe2c0) (0xc0005c34a0) Create stream\nI0722 12:15:07.060680    2198 log.go:172] (0xc0007fe2c0) (0xc0005c34a0) Stream added, broadcasting: 1\nI0722 12:15:07.062795    2198 log.go:172] (0xc0007fe2c0) Reply frame received for 1\nI0722 12:15:07.062837    2198 log.go:172] (0xc0007fe2c0) (0xc000372000) Create stream\nI0722 12:15:07.062848    2198 log.go:172] (0xc0007fe2c0) (0xc000372000) Stream added, broadcasting: 3\nI0722 12:15:07.063574    2198 log.go:172] (0xc0007fe2c0) Reply frame received for 3\nI0722 12:15:07.063600    2198 log.go:172] (0xc0007fe2c0) (0xc0005c3540) Create stream\nI0722 12:15:07.063608    2198 log.go:172] (0xc0007fe2c0) (0xc0005c3540) Stream added, broadcasting: 5\nI0722 12:15:07.064224    2198 log.go:172] (0xc0007fe2c0) Reply frame received for 5\nI0722 12:15:07.128491    2198 log.go:172] (0xc0007fe2c0) Data frame received for 3\nI0722 12:15:07.128547    2198 log.go:172] (0xc000372000) (3) Data frame handling\nI0722 12:15:07.128565    2198 log.go:172] (0xc000372000) (3) Data frame sent\nI0722 12:15:07.128580    2198 log.go:172] (0xc0007fe2c0) Data frame received for 3\nI0722 12:15:07.128590    2198 log.go:172] (0xc000372000) (3) Data frame handling\nI0722 12:15:07.128631    2198 log.go:172] (0xc0007fe2c0) Data frame received for 5\nI0722 12:15:07.128655    2198 log.go:172] (0xc0005c3540) (5) Data frame handling\nI0722 12:15:07.130012    2198 log.go:172] (0xc0007fe2c0) Data frame received for 1\nI0722 12:15:07.130043    2198 log.go:172] (0xc0005c34a0) (1) Data frame handling\nI0722 12:15:07.130077    2198 log.go:172] (0xc0005c34a0) (1) Data frame sent\nI0722 12:15:07.130174    2198 log.go:172] (0xc0007fe2c0) (0xc0005c34a0) Stream removed, broadcasting: 1\nI0722 12:15:07.130232    2198 log.go:172] (0xc0007fe2c0) Go away received\nI0722 12:15:07.130459    2198 log.go:172] (0xc0007fe2c0) (0xc0005c34a0) Stream removed, broadcasting: 1\nI0722 12:15:07.130489    2198 log.go:172] (0xc0007fe2c0) (0xc000372000) Stream removed, broadcasting: 3\nI0722 12:15:07.130504    2198 log.go:172] (0xc0007fe2c0) (0xc0005c3540) Stream removed, broadcasting: 5\n"
Jul 22 12:15:07.134: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 22 12:15:07.134: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 22 12:15:07.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2269x ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 22 12:15:07.362: INFO: stderr: "I0722 12:15:07.261484    2221 log.go:172] (0xc000138840) (0xc000325400) Create stream\nI0722 12:15:07.261544    2221 log.go:172] (0xc000138840) (0xc000325400) Stream added, broadcasting: 1\nI0722 12:15:07.264387    2221 log.go:172] (0xc000138840) Reply frame received for 1\nI0722 12:15:07.264465    2221 log.go:172] (0xc000138840) (0xc000768000) Create stream\nI0722 12:15:07.264488    2221 log.go:172] (0xc000138840) (0xc000768000) Stream added, broadcasting: 3\nI0722 12:15:07.265658    2221 log.go:172] (0xc000138840) Reply frame received for 3\nI0722 12:15:07.265707    2221 log.go:172] (0xc000138840) (0xc0006b6000) Create stream\nI0722 12:15:07.265747    2221 log.go:172] (0xc000138840) (0xc0006b6000) Stream added, broadcasting: 5\nI0722 12:15:07.266589    2221 log.go:172] (0xc000138840) Reply frame received for 5\nI0722 12:15:07.357039    2221 log.go:172] (0xc000138840) Data frame received for 3\nI0722 12:15:07.357076    2221 log.go:172] (0xc000138840) Data frame received for 5\nI0722 12:15:07.357100    2221 log.go:172] (0xc000768000) (3) Data frame handling\nI0722 12:15:07.357129    2221 log.go:172] (0xc000768000) (3) Data frame sent\nI0722 12:15:07.357173    2221 log.go:172] (0xc000138840) Data frame received for 3\nI0722 12:15:07.357184    2221 log.go:172] (0xc000768000) (3) Data frame handling\nI0722 12:15:07.357206    2221 log.go:172] (0xc0006b6000) (5) Data frame handling\nI0722 12:15:07.358760    2221 log.go:172] (0xc000138840) Data frame received for 1\nI0722 12:15:07.358785    2221 log.go:172] (0xc000325400) (1) Data frame handling\nI0722 12:15:07.358800    2221 log.go:172] (0xc000325400) (1) Data frame sent\nI0722 12:15:07.358813    2221 log.go:172] (0xc000138840) (0xc000325400) Stream removed, broadcasting: 1\nI0722 12:15:07.358837    2221 log.go:172] (0xc000138840) Go away received\nI0722 12:15:07.358962    2221 log.go:172] (0xc000138840) (0xc000325400) Stream removed, broadcasting: 1\nI0722 12:15:07.358979    2221 log.go:172] (0xc000138840) (0xc000768000) Stream removed, broadcasting: 3\nI0722 12:15:07.358989    2221 log.go:172] (0xc000138840) (0xc0006b6000) Stream removed, broadcasting: 5\n"
Jul 22 12:15:07.363: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 22 12:15:07.363: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 22 12:15:07.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-2269x ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 22 12:15:07.574: INFO: stderr: "I0722 12:15:07.472068    2243 log.go:172] (0xc00014c840) (0xc000760640) Create stream\nI0722 12:15:07.472132    2243 log.go:172] (0xc00014c840) (0xc000760640) Stream added, broadcasting: 1\nI0722 12:15:07.474560    2243 log.go:172] (0xc00014c840) Reply frame received for 1\nI0722 12:15:07.474595    2243 log.go:172] (0xc00014c840) (0xc000652c80) Create stream\nI0722 12:15:07.474607    2243 log.go:172] (0xc00014c840) (0xc000652c80) Stream added, broadcasting: 3\nI0722 12:15:07.475568    2243 log.go:172] (0xc00014c840) Reply frame received for 3\nI0722 12:15:07.475612    2243 log.go:172] (0xc00014c840) (0xc000378000) Create stream\nI0722 12:15:07.475624    2243 log.go:172] (0xc00014c840) (0xc000378000) Stream added, broadcasting: 5\nI0722 12:15:07.476525    2243 log.go:172] (0xc00014c840) Reply frame received for 5\nI0722 12:15:07.567045    2243 log.go:172] (0xc00014c840) Data frame received for 5\nI0722 12:15:07.567068    2243 log.go:172] (0xc000378000) (5) Data frame handling\nI0722 12:15:07.567105    2243 log.go:172] (0xc00014c840) Data frame received for 3\nI0722 12:15:07.567134    2243 log.go:172] (0xc000652c80) (3) Data frame handling\nI0722 12:15:07.567152    2243 log.go:172] (0xc000652c80) (3) Data frame sent\nI0722 12:15:07.567160    2243 log.go:172] (0xc00014c840) Data frame received for 3\nI0722 12:15:07.567168    2243 log.go:172] (0xc000652c80) (3) Data frame handling\nI0722 12:15:07.569298    2243 log.go:172] (0xc00014c840) Data frame received for 1\nI0722 12:15:07.569333    2243 log.go:172] (0xc000760640) (1) Data frame handling\nI0722 12:15:07.569356    2243 log.go:172] (0xc000760640) (1) Data frame sent\nI0722 12:15:07.569392    2243 log.go:172] (0xc00014c840) (0xc000760640) Stream removed, broadcasting: 1\nI0722 12:15:07.569523    2243 log.go:172] (0xc00014c840) Go away received\nI0722 12:15:07.569716    2243 log.go:172] (0xc00014c840) (0xc000760640) Stream removed, broadcasting: 1\nI0722 12:15:07.569757    2243 log.go:172] (0xc00014c840) (0xc000652c80) Stream removed, broadcasting: 3\nI0722 12:15:07.569784    2243 log.go:172] (0xc00014c840) (0xc000378000) Stream removed, broadcasting: 5\n"
Jul 22 12:15:07.574: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 22 12:15:07.574: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 22 12:15:07.574: INFO: Waiting for statefulset status.replicas updated to 0
Jul 22 12:15:07.578: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jul 22 12:15:17.626: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 22 12:15:17.626: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 22 12:15:17.626: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 22 12:15:17.674: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 22 12:15:17.674: INFO: ss-0  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  }]
Jul 22 12:15:17.674: INFO: ss-1  hunter-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:17.674: INFO: ss-2  hunter-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:17.675: INFO: 
Jul 22 12:15:17.675: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 22 12:15:18.680: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 22 12:15:18.680: INFO: ss-0  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  }]
Jul 22 12:15:18.680: INFO: ss-1  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:18.680: INFO: ss-2  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:18.680: INFO: 
Jul 22 12:15:18.680: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 22 12:15:19.685: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 22 12:15:19.685: INFO: ss-0  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  }]
Jul 22 12:15:19.685: INFO: ss-1  hunter-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:19.685: INFO: ss-2  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:19.685: INFO: 
Jul 22 12:15:19.685: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 22 12:15:20.690: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 22 12:15:20.690: INFO: ss-0  hunter-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  }]
Jul 22 12:15:20.690: INFO: ss-1  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:20.690: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:20.690: INFO: 
Jul 22 12:15:20.690: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 22 12:15:21.695: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 22 12:15:21.695: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  }]
Jul 22 12:15:21.695: INFO: ss-1  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:21.695: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:21.695: INFO: 
Jul 22 12:15:21.695: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 22 12:15:22.699: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 22 12:15:22.699: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  }]
Jul 22 12:15:22.699: INFO: ss-1  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:22.699: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:22.700: INFO: 
Jul 22 12:15:22.700: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 22 12:15:23.705: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 22 12:15:23.705: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  }]
Jul 22 12:15:23.705: INFO: ss-1  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:23.705: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:23.705: INFO: 
Jul 22 12:15:23.705: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 22 12:15:24.709: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 22 12:15:24.709: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  }]
Jul 22 12:15:24.709: INFO: ss-1  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:24.710: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:24.710: INFO: 
Jul 22 12:15:24.710: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 22 12:15:25.715: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 22 12:15:25.715: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  }]
Jul 22 12:15:25.715: INFO: ss-1  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:25.715: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:25.715: INFO: 
Jul 22 12:15:25.715: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 22 12:15:26.719: INFO: POD   NODE            PHASE    GRACE  CONDITIONS
Jul 22 12:15:26.719: INFO: ss-0  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:25 +0000 UTC  }]
Jul 22 12:15:26.719: INFO: ss-1  hunter-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:26.719: INFO: ss-2  hunter-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:15:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-22 12:14:46 +0000 UTC  }]
Jul 22 12:15:26.719: INFO: 
Jul 22 12:15:26.719: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace e2e-tests-statefulset-2269x
Jul 22 12:15:27.724: INFO: Scaling statefulset ss to 0
Jul 22 12:15:27.732: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 22 12:15:27.734: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2269x
Jul 22 12:15:27.736: INFO: Scaling statefulset ss to 0
Jul 22 12:15:27.743: INFO: Waiting for statefulset status.replicas updated to 0
Jul 22 12:15:27.745: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:15:27.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-2269x" for this suite.
Jul 22 12:15:33.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:15:33.819: INFO: namespace: e2e-tests-statefulset-2269x, resource: bindings, ignored listing per whitelist
Jul 22 12:15:33.919: INFO: namespace e2e-tests-statefulset-2269x deletion completed in 6.131841749s

• [SLOW TEST:68.305 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
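For context, the burst behaviour exercised above comes from a StatefulSet using podManagementPolicy: Parallel, and the Ready=false flips in the log follow from moving index.html out of the nginx web root, which the pod's readiness check evidently serves; both details are inferred, since the fixture's manifest is not shown in this output. A condensed kubectl replay of the same sequence (statefulset, pod, and namespace names as in the log) is:

NS=e2e-tests-statefulset-2269x
KC=/root/.kube/config
# Break ss-0's readiness check by hiding the file it serves (same exec as the log).
kubectl --kubeconfig=$KC -n $NS exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# With Parallel pod management the scale-up is not blocked by the unready pod.
kubectl --kubeconfig=$KC -n $NS scale statefulset ss --replicas=3
# Break readiness on every replica, then verify burst scale-down still proceeds.
for p in ss-0 ss-1 ss-2; do
  kubectl --kubeconfig=$KC -n $NS exec $p -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
done
kubectl --kubeconfig=$KC -n $NS scale statefulset ss --replicas=0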
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:15:33.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-0b0c2137-cc15-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 12:15:34.053: INFO: Waiting up to 5m0s for pod "pod-secrets-0b143a81-cc15-11ea-aa05-0242ac11000b" in namespace "e2e-tests-secrets-gp6cj" to be "success or failure"
Jul 22 12:15:34.059: INFO: Pod "pod-secrets-0b143a81-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.780042ms
Jul 22 12:15:36.062: INFO: Pod "pod-secrets-0b143a81-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009006094s
Jul 22 12:15:38.066: INFO: Pod "pod-secrets-0b143a81-cc15-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012860296s
STEP: Saw pod success
Jul 22 12:15:38.066: INFO: Pod "pod-secrets-0b143a81-cc15-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:15:38.068: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-0b143a81-cc15-11ea-aa05-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Jul 22 12:15:38.304: INFO: Waiting for pod pod-secrets-0b143a81-cc15-11ea-aa05-0242ac11000b to disappear
Jul 22 12:15:38.365: INFO: Pod pod-secrets-0b143a81-cc15-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:15:38.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-gp6cj" for this suite.
Jul 22 12:15:44.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:15:44.526: INFO: namespace: e2e-tests-secrets-gp6cj, resource: bindings, ignored listing per whitelist
Jul 22 12:15:44.526: INFO: namespace e2e-tests-secrets-gp6cj deletion completed in 6.157833882s

• [SLOW TEST:10.607 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
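A small sketch of what "consumable in multiple volumes" means in practice: one Secret mounted twice into the same pod and read from both paths. All names below are illustrative; this manifest approximates the idea, it is not the e2e fixture itself.

kubectl --kubeconfig=/root/.kube/config create secret generic demo-secret --from-literal=data-1='value-1'
kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: demo-secret
  - name: secret-volume-2
    secret:
      secretName: demo-secret
EOF
# Once the pod has completed, its log shows value-1 twice, once per mount.
kubectl --kubeconfig=/root/.kube/config logs pod-secrets-demo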
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:15:44.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-gs8w5
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a new StatefulSet
Jul 22 12:15:44.667: INFO: Found 0 stateful pods, waiting for 3
Jul 22 12:15:54.695: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 12:15:54.695: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 12:15:54.695: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 22 12:16:04.731: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 12:16:04.731: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 12:16:04.731: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jul 22 12:16:04.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gs8w5 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 22 12:16:05.017: INFO: stderr: "I0722 12:16:04.883095    2265 log.go:172] (0xc000744370) (0xc000778640) Create stream\nI0722 12:16:04.883170    2265 log.go:172] (0xc000744370) (0xc000778640) Stream added, broadcasting: 1\nI0722 12:16:04.885613    2265 log.go:172] (0xc000744370) Reply frame received for 1\nI0722 12:16:04.885662    2265 log.go:172] (0xc000744370) (0xc0007786e0) Create stream\nI0722 12:16:04.885673    2265 log.go:172] (0xc000744370) (0xc0007786e0) Stream added, broadcasting: 3\nI0722 12:16:04.886615    2265 log.go:172] (0xc000744370) Reply frame received for 3\nI0722 12:16:04.886647    2265 log.go:172] (0xc000744370) (0xc000616dc0) Create stream\nI0722 12:16:04.886655    2265 log.go:172] (0xc000744370) (0xc000616dc0) Stream added, broadcasting: 5\nI0722 12:16:04.887573    2265 log.go:172] (0xc000744370) Reply frame received for 5\nI0722 12:16:05.010010    2265 log.go:172] (0xc000744370) Data frame received for 3\nI0722 12:16:05.010058    2265 log.go:172] (0xc0007786e0) (3) Data frame handling\nI0722 12:16:05.010073    2265 log.go:172] (0xc0007786e0) (3) Data frame sent\nI0722 12:16:05.010084    2265 log.go:172] (0xc000744370) Data frame received for 3\nI0722 12:16:05.010094    2265 log.go:172] (0xc0007786e0) (3) Data frame handling\nI0722 12:16:05.010132    2265 log.go:172] (0xc000744370) Data frame received for 5\nI0722 12:16:05.010152    2265 log.go:172] (0xc000616dc0) (5) Data frame handling\nI0722 12:16:05.012489    2265 log.go:172] (0xc000744370) Data frame received for 1\nI0722 12:16:05.012520    2265 log.go:172] (0xc000778640) (1) Data frame handling\nI0722 12:16:05.012544    2265 log.go:172] (0xc000778640) (1) Data frame sent\nI0722 12:16:05.012562    2265 log.go:172] (0xc000744370) (0xc000778640) Stream removed, broadcasting: 1\nI0722 12:16:05.012578    2265 log.go:172] (0xc000744370) Go away received\nI0722 12:16:05.012996    2265 log.go:172] (0xc000744370) (0xc000778640) Stream removed, broadcasting: 1\nI0722 12:16:05.013043    2265 log.go:172] (0xc000744370) (0xc0007786e0) Stream removed, broadcasting: 3\nI0722 12:16:05.013069    2265 log.go:172] (0xc000744370) (0xc000616dc0) Stream removed, broadcasting: 5\n"
Jul 22 12:16:05.017: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 22 12:16:05.017: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul 22 12:16:15.048: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jul 22 12:16:25.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gs8w5 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 22 12:16:25.276: INFO: stderr: "I0722 12:16:25.193814    2287 log.go:172] (0xc00015c840) (0xc000764640) Create stream\nI0722 12:16:25.193862    2287 log.go:172] (0xc00015c840) (0xc000764640) Stream added, broadcasting: 1\nI0722 12:16:25.195602    2287 log.go:172] (0xc00015c840) Reply frame received for 1\nI0722 12:16:25.195629    2287 log.go:172] (0xc00015c840) (0xc000654c80) Create stream\nI0722 12:16:25.195636    2287 log.go:172] (0xc00015c840) (0xc000654c80) Stream added, broadcasting: 3\nI0722 12:16:25.196228    2287 log.go:172] (0xc00015c840) Reply frame received for 3\nI0722 12:16:25.196289    2287 log.go:172] (0xc00015c840) (0xc000586000) Create stream\nI0722 12:16:25.196299    2287 log.go:172] (0xc00015c840) (0xc000586000) Stream added, broadcasting: 5\nI0722 12:16:25.197031    2287 log.go:172] (0xc00015c840) Reply frame received for 5\nI0722 12:16:25.269829    2287 log.go:172] (0xc00015c840) Data frame received for 5\nI0722 12:16:25.269878    2287 log.go:172] (0xc000586000) (5) Data frame handling\nI0722 12:16:25.269918    2287 log.go:172] (0xc00015c840) Data frame received for 3\nI0722 12:16:25.269953    2287 log.go:172] (0xc000654c80) (3) Data frame handling\nI0722 12:16:25.270011    2287 log.go:172] (0xc000654c80) (3) Data frame sent\nI0722 12:16:25.270032    2287 log.go:172] (0xc00015c840) Data frame received for 3\nI0722 12:16:25.270043    2287 log.go:172] (0xc000654c80) (3) Data frame handling\nI0722 12:16:25.272092    2287 log.go:172] (0xc00015c840) Data frame received for 1\nI0722 12:16:25.272133    2287 log.go:172] (0xc000764640) (1) Data frame handling\nI0722 12:16:25.272174    2287 log.go:172] (0xc000764640) (1) Data frame sent\nI0722 12:16:25.272189    2287 log.go:172] (0xc00015c840) (0xc000764640) Stream removed, broadcasting: 1\nI0722 12:16:25.272212    2287 log.go:172] (0xc00015c840) Go away received\nI0722 12:16:25.272579    2287 log.go:172] (0xc00015c840) (0xc000764640) Stream removed, broadcasting: 1\nI0722 12:16:25.272610    2287 log.go:172] (0xc00015c840) (0xc000654c80) Stream removed, broadcasting: 3\nI0722 12:16:25.272622    2287 log.go:172] (0xc00015c840) (0xc000586000) Stream removed, broadcasting: 5\n"
Jul 22 12:16:25.277: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 22 12:16:25.277: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

STEP: Rolling back to a previous revision
Jul 22 12:16:45.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gs8w5 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 22 12:16:45.581: INFO: stderr: "I0722 12:16:45.417343    2310 log.go:172] (0xc00013c840) (0xc0006652c0) Create stream\nI0722 12:16:45.417402    2310 log.go:172] (0xc00013c840) (0xc0006652c0) Stream added, broadcasting: 1\nI0722 12:16:45.419803    2310 log.go:172] (0xc00013c840) Reply frame received for 1\nI0722 12:16:45.419864    2310 log.go:172] (0xc00013c840) (0xc0007b8000) Create stream\nI0722 12:16:45.419884    2310 log.go:172] (0xc00013c840) (0xc0007b8000) Stream added, broadcasting: 3\nI0722 12:16:45.420796    2310 log.go:172] (0xc00013c840) Reply frame received for 3\nI0722 12:16:45.420836    2310 log.go:172] (0xc00013c840) (0xc000665360) Create stream\nI0722 12:16:45.420847    2310 log.go:172] (0xc00013c840) (0xc000665360) Stream added, broadcasting: 5\nI0722 12:16:45.421698    2310 log.go:172] (0xc00013c840) Reply frame received for 5\nI0722 12:16:45.573975    2310 log.go:172] (0xc00013c840) Data frame received for 5\nI0722 12:16:45.574024    2310 log.go:172] (0xc000665360) (5) Data frame handling\nI0722 12:16:45.574076    2310 log.go:172] (0xc00013c840) Data frame received for 3\nI0722 12:16:45.574095    2310 log.go:172] (0xc0007b8000) (3) Data frame handling\nI0722 12:16:45.574115    2310 log.go:172] (0xc0007b8000) (3) Data frame sent\nI0722 12:16:45.574709    2310 log.go:172] (0xc00013c840) Data frame received for 3\nI0722 12:16:45.574748    2310 log.go:172] (0xc0007b8000) (3) Data frame handling\nI0722 12:16:45.576584    2310 log.go:172] (0xc00013c840) Data frame received for 1\nI0722 12:16:45.576677    2310 log.go:172] (0xc0006652c0) (1) Data frame handling\nI0722 12:16:45.576702    2310 log.go:172] (0xc0006652c0) (1) Data frame sent\nI0722 12:16:45.576714    2310 log.go:172] (0xc00013c840) (0xc0006652c0) Stream removed, broadcasting: 1\nI0722 12:16:45.576978    2310 log.go:172] (0xc00013c840) (0xc0006652c0) Stream removed, broadcasting: 1\nI0722 12:16:45.577004    2310 log.go:172] (0xc00013c840) (0xc0007b8000) Stream removed, broadcasting: 3\nI0722 12:16:45.577014    2310 log.go:172] (0xc00013c840) (0xc000665360) Stream removed, broadcasting: 5\n"
Jul 22 12:16:45.582: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 22 12:16:45.582: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul 22 12:16:55.611: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jul 22 12:17:05.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gs8w5 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 22 12:17:05.875: INFO: stderr: "I0722 12:17:05.782579    2332 log.go:172] (0xc0007ee4d0) (0xc0006f6640) Create stream\nI0722 12:17:05.782625    2332 log.go:172] (0xc0007ee4d0) (0xc0006f6640) Stream added, broadcasting: 1\nI0722 12:17:05.784625    2332 log.go:172] (0xc0007ee4d0) Reply frame received for 1\nI0722 12:17:05.784664    2332 log.go:172] (0xc0007ee4d0) (0xc0006f66e0) Create stream\nI0722 12:17:05.784675    2332 log.go:172] (0xc0007ee4d0) (0xc0006f66e0) Stream added, broadcasting: 3\nI0722 12:17:05.785627    2332 log.go:172] (0xc0007ee4d0) Reply frame received for 3\nI0722 12:17:05.785647    2332 log.go:172] (0xc0007ee4d0) (0xc0005d2dc0) Create stream\nI0722 12:17:05.785654    2332 log.go:172] (0xc0007ee4d0) (0xc0005d2dc0) Stream added, broadcasting: 5\nI0722 12:17:05.786499    2332 log.go:172] (0xc0007ee4d0) Reply frame received for 5\nI0722 12:17:05.868934    2332 log.go:172] (0xc0007ee4d0) Data frame received for 5\nI0722 12:17:05.868967    2332 log.go:172] (0xc0005d2dc0) (5) Data frame handling\nI0722 12:17:05.868990    2332 log.go:172] (0xc0007ee4d0) Data frame received for 3\nI0722 12:17:05.868997    2332 log.go:172] (0xc0006f66e0) (3) Data frame handling\nI0722 12:17:05.869006    2332 log.go:172] (0xc0006f66e0) (3) Data frame sent\nI0722 12:17:05.869013    2332 log.go:172] (0xc0007ee4d0) Data frame received for 3\nI0722 12:17:05.869019    2332 log.go:172] (0xc0006f66e0) (3) Data frame handling\nI0722 12:17:05.870623    2332 log.go:172] (0xc0007ee4d0) Data frame received for 1\nI0722 12:17:05.870665    2332 log.go:172] (0xc0006f6640) (1) Data frame handling\nI0722 12:17:05.870688    2332 log.go:172] (0xc0006f6640) (1) Data frame sent\nI0722 12:17:05.870730    2332 log.go:172] (0xc0007ee4d0) (0xc0006f6640) Stream removed, broadcasting: 1\nI0722 12:17:05.871012    2332 log.go:172] (0xc0007ee4d0) (0xc0006f6640) Stream removed, broadcasting: 1\nI0722 12:17:05.871056    2332 log.go:172] (0xc0007ee4d0) Go away received\nI0722 12:17:05.871114    2332 log.go:172] (0xc0007ee4d0) (0xc0006f66e0) Stream removed, broadcasting: 3\nI0722 12:17:05.871157    2332 log.go:172] (0xc0007ee4d0) (0xc0005d2dc0) Stream removed, broadcasting: 5\n"
Jul 22 12:17:05.875: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 22 12:17:05.875: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul 22 12:17:15.896: INFO: Waiting for StatefulSet e2e-tests-statefulset-gs8w5/ss2 to complete update
Jul 22 12:17:15.896: INFO: Waiting for Pod e2e-tests-statefulset-gs8w5/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 22 12:17:15.896: INFO: Waiting for Pod e2e-tests-statefulset-gs8w5/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 22 12:17:15.896: INFO: Waiting for Pod e2e-tests-statefulset-gs8w5/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 22 12:17:25.904: INFO: Waiting for StatefulSet e2e-tests-statefulset-gs8w5/ss2 to complete update
Jul 22 12:17:25.904: INFO: Waiting for Pod e2e-tests-statefulset-gs8w5/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 22 12:17:35.904: INFO: Waiting for StatefulSet e2e-tests-statefulset-gs8w5/ss2 to complete update
Jul 22 12:17:35.904: INFO: Waiting for Pod e2e-tests-statefulset-gs8w5/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 22 12:17:45.904: INFO: Deleting all statefulset in ns e2e-tests-statefulset-gs8w5
Jul 22 12:17:45.907: INFO: Scaling statefulset ss2 to 0
Jul 22 12:18:15.930: INFO: Waiting for statefulset status.replicas updated to 0
Jul 22 12:18:15.933: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:18:15.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-gs8w5" for this suite.
Jul 22 12:18:24.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:18:24.057: INFO: namespace: e2e-tests-statefulset-gs8w5, resource: bindings, ignored listing per whitelist
Jul 22 12:18:24.133: INFO: namespace e2e-tests-statefulset-gs8w5 deletion completed in 8.1828162s

• [SLOW TEST:159.607 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
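
Note: the rollback above is driven by plain kubectl exec calls plus a StatefulSet template change; a rough manual equivalent, using this run's (ephemeral) namespace and pod names, is the following sketch:

# Stash the served page on replica ss2-1 (same exec call as logged above)
kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gs8w5 ss2-1 -- \
  /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# After each template update the controller replaces pods in reverse ordinal
# order (ss2-2, ss2-1, ss2-0); this blocks until the rollout converges
kubectl --namespace=e2e-tests-statefulset-gs8w5 rollout status statefulset/ss2
# Put the page back, matching the second exec call logged above
kubectl --kubeconfig=/root/.kube/config exec --namespace=e2e-tests-statefulset-gs8w5 ss2-1 -- \
  /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'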
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:18:24.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
Jul 22 12:18:24.275: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:18:32.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-init-container-l8klz" for this suite.
Jul 22 12:18:38.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:18:38.251: INFO: namespace: e2e-tests-init-container-l8klz, resource: bindings, ignored listing per whitelist
Jul 22 12:18:38.269: INFO: namespace e2e-tests-init-container-l8klz deletion completed in 6.099206918s

• [SLOW TEST:14.135 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
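
Note: the pod spec this test creates is not printed in the log; a minimal, hypothetical pod of the same shape (names and images here are illustrative, not the suite's) would be:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-restartnever-demo   # hypothetical name
spec:
  restartPolicy: Never           # the property under test
  initContainers:                # init containers must all run to completion first
  - name: init-1
    image: busybox
    command: ['sh', '-c', 'echo init-1 done']
  - name: init-2
    image: busybox
    command: ['sh', '-c', 'echo init-2 done']
  containers:
  - name: main
    image: busybox
    command: ['sh', '-c', 'echo main done']
EOF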
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:18:38.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: starting the proxy server
Jul 22 12:18:38.365: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:18:38.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-5pnm6" for this suite.
Jul 22 12:18:44.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:18:44.527: INFO: namespace: e2e-tests-kubectl-5pnm6, resource: bindings, ignored listing per whitelist
Jul 22 12:18:44.556: INFO: namespace e2e-tests-kubectl-5pnm6 deletion completed in 6.098500565s

• [SLOW TEST:6.287 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
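
Note: --port 0 asks the proxy for an OS-assigned port; rerun by hand, the check looks roughly like this (the port in the curl URL is whatever the proxy prints, shown here as a placeholder):

# Prints something like "Starting to serve on 127.0.0.1:<port>"
kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter &
# Hit the API root through the proxy, substituting the printed port
curl http://127.0.0.1:<port>/api/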
------------------------------
S
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:18:44.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-7caca416-cc15-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 12:18:44.737: INFO: Waiting up to 5m0s for pod "pod-secrets-7cbba6e5-cc15-11ea-aa05-0242ac11000b" in namespace "e2e-tests-secrets-d8c4j" to be "success or failure"
Jul 22 12:18:44.741: INFO: Pod "pod-secrets-7cbba6e5-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.908637ms
Jul 22 12:18:46.745: INFO: Pod "pod-secrets-7cbba6e5-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008106224s
Jul 22 12:18:48.750: INFO: Pod "pod-secrets-7cbba6e5-cc15-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012802528s
STEP: Saw pod success
Jul 22 12:18:48.750: INFO: Pod "pod-secrets-7cbba6e5-cc15-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:18:48.753: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-7cbba6e5-cc15-11ea-aa05-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Jul 22 12:18:48.786: INFO: Waiting for pod pod-secrets-7cbba6e5-cc15-11ea-aa05-0242ac11000b to disappear
Jul 22 12:18:48.823: INFO: Pod pod-secrets-7cbba6e5-cc15-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:18:48.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-d8c4j" for this suite.
Jul 22 12:18:54.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:18:54.872: INFO: namespace: e2e-tests-secrets-d8c4j, resource: bindings, ignored listing per whitelist
Jul 22 12:18:54.915: INFO: namespace e2e-tests-secrets-d8c4j deletion completed in 6.088600742s
STEP: Destroying namespace "e2e-tests-secret-namespace-8f8ph" for this suite.
Jul 22 12:19:00.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:19:00.975: INFO: namespace: e2e-tests-secret-namespace-8f8ph, resource: bindings, ignored listing per whitelist
Jul 22 12:19:01.010: INFO: namespace e2e-tests-secret-namespace-8f8ph deletion completed in 6.095094368s

• [SLOW TEST:16.454 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
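
Note: the point of this spec is that a secret of the same name in another namespace does not leak into the mount; a rough manual equivalent (all names hypothetical) is:

# Same secret name, different payloads, in two namespaces
kubectl create namespace ns-a
kubectl create namespace ns-b
kubectl -n ns-a create secret generic shared-name --from-literal=data=from-ns-a
kubectl -n ns-b create secret generic shared-name --from-literal=data=from-ns-b
# A pod in ns-a that mounts "shared-name" must only ever see the ns-a payload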
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:19:01.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0722 12:19:12.159844       7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 22 12:19:12.159: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:19:12.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-4l2m2" for this suite.
Jul 22 12:19:20.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:19:20.322: INFO: namespace: e2e-tests-gc-4l2m2, resource: bindings, ignored listing per whitelist
Jul 22 12:19:20.361: INFO: namespace e2e-tests-gc-4l2m2 deletion completed in 8.198686842s

• [SLOW TEST:19.351 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
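
Note: the pods that survive deletion are the ones carrying two ownerReferences; while the test namespace still exists, that is easy to confirm by hand:

# Print each pod with the names of its owners; pods kept alive by
# simpletest-rc-to-stay list both replication controllers
kubectl -n e2e-tests-gc-4l2m2 get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[*].name}{"\n"}{end}'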
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:19:20.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a replication controller
Jul 22 12:19:20.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-zrlsz'
Jul 22 12:19:23.147: INFO: stderr: ""
Jul 22 12:19:23.147: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 22 12:19:23.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zrlsz'
Jul 22 12:19:23.279: INFO: stderr: ""
Jul 22 12:19:23.279: INFO: stdout: "update-demo-nautilus-5w9sq update-demo-nautilus-gg5vw "
Jul 22 12:19:23.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5w9sq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zrlsz'
Jul 22 12:19:23.367: INFO: stderr: ""
Jul 22 12:19:23.367: INFO: stdout: ""
Jul 22 12:19:23.367: INFO: update-demo-nautilus-5w9sq is created but not running
Jul 22 12:19:28.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-zrlsz'
Jul 22 12:19:28.471: INFO: stderr: ""
Jul 22 12:19:28.471: INFO: stdout: "update-demo-nautilus-5w9sq update-demo-nautilus-gg5vw "
Jul 22 12:19:28.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5w9sq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zrlsz'
Jul 22 12:19:28.573: INFO: stderr: ""
Jul 22 12:19:28.573: INFO: stdout: "true"
Jul 22 12:19:28.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5w9sq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zrlsz'
Jul 22 12:19:28.681: INFO: stderr: ""
Jul 22 12:19:28.681: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 22 12:19:28.681: INFO: validating pod update-demo-nautilus-5w9sq
Jul 22 12:19:28.686: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 22 12:19:28.686: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 22 12:19:28.686: INFO: update-demo-nautilus-5w9sq is verified up and running
Jul 22 12:19:28.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gg5vw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zrlsz'
Jul 22 12:19:28.789: INFO: stderr: ""
Jul 22 12:19:28.789: INFO: stdout: "true"
Jul 22 12:19:28.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gg5vw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-zrlsz'
Jul 22 12:19:28.883: INFO: stderr: ""
Jul 22 12:19:28.883: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 22 12:19:28.883: INFO: validating pod update-demo-nautilus-gg5vw
Jul 22 12:19:28.888: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 22 12:19:28.888: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 22 12:19:28.888: INFO: update-demo-nautilus-gg5vw is verified up and running
STEP: using delete to clean up resources
Jul 22 12:19:28.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-zrlsz'
Jul 22 12:19:29.014: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 22 12:19:29.014: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 22 12:19:29.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-zrlsz'
Jul 22 12:19:29.149: INFO: stderr: "No resources found.\n"
Jul 22 12:19:29.149: INFO: stdout: ""
Jul 22 12:19:29.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=e2e-tests-kubectl-zrlsz -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 22 12:19:29.264: INFO: stderr: ""
Jul 22 12:19:29.264: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:19:29.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-zrlsz" for this suite.
Jul 22 12:19:35.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:19:35.641: INFO: namespace: e2e-tests-kubectl-zrlsz, resource: bindings, ignored listing per whitelist
Jul 22 12:19:35.689: INFO: namespace e2e-tests-kubectl-zrlsz deletion completed in 6.418965328s

• [SLOW TEST:15.328 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
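
Note: the readiness probe above is just kubectl with a go-template; the same checks can be run by hand (namespace and pod name are this run's, copied from the log):

# Is the update-demo container of this pod running?
kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5w9sq -o template \
  --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' \
  --namespace=e2e-tests-kubectl-zrlsz
# Force-delete the RC and confirm nothing labelled name=update-demo remains
kubectl --kubeconfig=/root/.kube/config delete rc update-demo-nautilus --grace-period=0 --force \
  --namespace=e2e-tests-kubectl-zrlsz
kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers \
  --namespace=e2e-tests-kubectl-zrlsz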
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:19:35.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul 22 12:19:35.848: INFO: Waiting up to 5m0s for pod "pod-9b330583-cc15-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-tt6hg" to be "success or failure"
Jul 22 12:19:35.878: INFO: Pod "pod-9b330583-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.761861ms
Jul 22 12:19:37.956: INFO: Pod "pod-9b330583-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107388732s
Jul 22 12:19:39.959: INFO: Pod "pod-9b330583-cc15-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111020465s
STEP: Saw pod success
Jul 22 12:19:39.960: INFO: Pod "pod-9b330583-cc15-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:19:39.963: INFO: Trying to get logs from node hunter-worker pod pod-9b330583-cc15-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 12:19:40.047: INFO: Waiting for pod pod-9b330583-cc15-11ea-aa05-0242ac11000b to disappear
Jul 22 12:19:40.066: INFO: Pod pod-9b330583-cc15-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:19:40.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-tt6hg" for this suite.
Jul 22 12:19:46.100: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:19:46.147: INFO: namespace: e2e-tests-emptydir-tt6hg, resource: bindings, ignored listing per whitelist
Jul 22 12:19:46.171: INFO: namespace e2e-tests-emptydir-tt6hg deletion completed in 6.100559151s

• [SLOW TEST:10.482 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
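
Note: the emptyDir permutations in this suite all follow the same pattern; a hypothetical pod equivalent to this tmpfs variant (name and image are illustrative) is:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ['sh', '-c', 'ls -ld /mnt/volume && mount | grep /mnt/volume']
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # the "tmpfs" case; omit for the node default medium
EOF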
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:19:46.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-8gct5
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 22 12:19:46.264: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 22 12:20:14.450: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.173:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8gct5 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 12:20:14.450: INFO: >>> kubeConfig: /root/.kube/config
I0722 12:20:14.485233       7 log.go:172] (0xc00194a2c0) (0xc000a65ae0) Create stream
I0722 12:20:14.485262       7 log.go:172] (0xc00194a2c0) (0xc000a65ae0) Stream added, broadcasting: 1
I0722 12:20:14.487348       7 log.go:172] (0xc00194a2c0) Reply frame received for 1
I0722 12:20:14.487389       7 log.go:172] (0xc00194a2c0) (0xc000a65d60) Create stream
I0722 12:20:14.487404       7 log.go:172] (0xc00194a2c0) (0xc000a65d60) Stream added, broadcasting: 3
I0722 12:20:14.488483       7 log.go:172] (0xc00194a2c0) Reply frame received for 3
I0722 12:20:14.488530       7 log.go:172] (0xc00194a2c0) (0xc001324a00) Create stream
I0722 12:20:14.488551       7 log.go:172] (0xc00194a2c0) (0xc001324a00) Stream added, broadcasting: 5
I0722 12:20:14.489691       7 log.go:172] (0xc00194a2c0) Reply frame received for 5
I0722 12:20:14.567343       7 log.go:172] (0xc00194a2c0) Data frame received for 3
I0722 12:20:14.567376       7 log.go:172] (0xc00194a2c0) Data frame received for 5
I0722 12:20:14.567391       7 log.go:172] (0xc001324a00) (5) Data frame handling
I0722 12:20:14.567436       7 log.go:172] (0xc000a65d60) (3) Data frame handling
I0722 12:20:14.567464       7 log.go:172] (0xc000a65d60) (3) Data frame sent
I0722 12:20:14.567476       7 log.go:172] (0xc00194a2c0) Data frame received for 3
I0722 12:20:14.567491       7 log.go:172] (0xc000a65d60) (3) Data frame handling
I0722 12:20:14.569379       7 log.go:172] (0xc00194a2c0) Data frame received for 1
I0722 12:20:14.569400       7 log.go:172] (0xc000a65ae0) (1) Data frame handling
I0722 12:20:14.569417       7 log.go:172] (0xc000a65ae0) (1) Data frame sent
I0722 12:20:14.569439       7 log.go:172] (0xc00194a2c0) (0xc000a65ae0) Stream removed, broadcasting: 1
I0722 12:20:14.569525       7 log.go:172] (0xc00194a2c0) (0xc000a65ae0) Stream removed, broadcasting: 1
I0722 12:20:14.569546       7 log.go:172] (0xc00194a2c0) (0xc000a65d60) Stream removed, broadcasting: 3
I0722 12:20:14.569573       7 log.go:172] (0xc00194a2c0) (0xc001324a00) Stream removed, broadcasting: 5
Jul 22 12:20:14.569: INFO: Found all expected endpoints: [netserver-0]
I0722 12:20:14.569642       7 log.go:172] (0xc00194a2c0) Go away received
Jul 22 12:20:14.572: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.128:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-8gct5 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 12:20:14.572: INFO: >>> kubeConfig: /root/.kube/config
I0722 12:20:14.610730       7 log.go:172] (0xc00202a2c0) (0xc001e5cb40) Create stream
I0722 12:20:14.610755       7 log.go:172] (0xc00202a2c0) (0xc001e5cb40) Stream added, broadcasting: 1
I0722 12:20:14.615581       7 log.go:172] (0xc00202a2c0) Reply frame received for 1
I0722 12:20:14.615628       7 log.go:172] (0xc00202a2c0) (0xc0016765a0) Create stream
I0722 12:20:14.615639       7 log.go:172] (0xc00202a2c0) (0xc0016765a0) Stream added, broadcasting: 3
I0722 12:20:14.618584       7 log.go:172] (0xc00202a2c0) Reply frame received for 3
I0722 12:20:14.618603       7 log.go:172] (0xc00202a2c0) (0xc001e5cbe0) Create stream
I0722 12:20:14.618611       7 log.go:172] (0xc00202a2c0) (0xc001e5cbe0) Stream added, broadcasting: 5
I0722 12:20:14.619473       7 log.go:172] (0xc00202a2c0) Reply frame received for 5
I0722 12:20:14.690977       7 log.go:172] (0xc00202a2c0) Data frame received for 5
I0722 12:20:14.691011       7 log.go:172] (0xc001e5cbe0) (5) Data frame handling
I0722 12:20:14.691045       7 log.go:172] (0xc00202a2c0) Data frame received for 3
I0722 12:20:14.691088       7 log.go:172] (0xc0016765a0) (3) Data frame handling
I0722 12:20:14.691123       7 log.go:172] (0xc0016765a0) (3) Data frame sent
I0722 12:20:14.691147       7 log.go:172] (0xc00202a2c0) Data frame received for 3
I0722 12:20:14.691170       7 log.go:172] (0xc0016765a0) (3) Data frame handling
I0722 12:20:14.692855       7 log.go:172] (0xc00202a2c0) Data frame received for 1
I0722 12:20:14.692881       7 log.go:172] (0xc001e5cb40) (1) Data frame handling
I0722 12:20:14.692904       7 log.go:172] (0xc001e5cb40) (1) Data frame sent
I0722 12:20:14.692920       7 log.go:172] (0xc00202a2c0) (0xc001e5cb40) Stream removed, broadcasting: 1
I0722 12:20:14.692941       7 log.go:172] (0xc00202a2c0) Go away received
I0722 12:20:14.693109       7 log.go:172] (0xc00202a2c0) (0xc001e5cb40) Stream removed, broadcasting: 1
I0722 12:20:14.693137       7 log.go:172] (0xc00202a2c0) (0xc0016765a0) Stream removed, broadcasting: 3
I0722 12:20:14.693154       7 log.go:172] (0xc00202a2c0) (0xc001e5cbe0) Stream removed, broadcasting: 5
Jul 22 12:20:14.693: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:20:14.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-8gct5" for this suite.
Jul 22 12:20:38.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:20:38.788: INFO: namespace: e2e-tests-pod-network-test-8gct5, resource: bindings, ignored listing per whitelist
Jul 22 12:20:38.804: INFO: namespace e2e-tests-pod-network-test-8gct5 deletion completed in 24.107887315s

• [SLOW TEST:52.633 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
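
Note: the connectivity probe is a plain curl from the host-network helper pod to each netserver pod IP; reproduced by hand it is (pod IPs are this run's and change on every run):

# From inside the host-network helper pod, fetch the netserver's /hostName endpoint
kubectl --namespace=e2e-tests-pod-network-test-8gct5 exec host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.173:8080/hostName | grep -v '^\s*$'"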
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:20:38.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0722 12:20:39.968455       7 metrics_grabber.go:81] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 22 12:20:39.968: INFO: For apiserver_request_count:
For apiserver_request_latencies_summary:
For etcd_helper_cache_entry_count:
For etcd_helper_cache_hit_count:
For etcd_helper_cache_miss_count:
For etcd_request_cache_add_latencies_summary:
For etcd_request_cache_get_latencies_summary:
For etcd_request_latencies_summary:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:20:39.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-gc-ngcq7" for this suite.
Jul 22 12:20:45.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:20:46.043: INFO: namespace: e2e-tests-gc-ngcq7, resource: bindings, ignored listing per whitelist
Jul 22 12:20:46.064: INFO: namespace e2e-tests-gc-ngcq7 deletion completed in 6.093215297s

• [SLOW TEST:7.259 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
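
Note: non-orphaning (cascading) deletion is also kubectl's default behaviour; a rough manual check (deployment name and label are hypothetical) is:

# Delete a deployment with default cascading semantics, then confirm its
# ReplicaSets and pods are garbage collected rather than orphaned
kubectl delete deployment demo-deployment
kubectl get rs,pods -l app=demo   # hypothetical label; should eventually return nothing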
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:20:46.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating projection with configMap that has name projected-configmap-test-upd-c5304acc-cc15-11ea-aa05-0242ac11000b
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-c5304acc-cc15-11ea-aa05-0242ac11000b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:20:52.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-lmb48" for this suite.
Jul 22 12:21:16.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:21:16.506: INFO: namespace: e2e-tests-projected-lmb48, resource: bindings, ignored listing per whitelist
Jul 22 12:21:16.575: INFO: namespace e2e-tests-projected-lmb48 deletion completed in 24.117492086s

• [SLOW TEST:30.510 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
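
Note: updated configMap data propagates into an already-mounted projected volume after the kubelet's next sync; a self-contained way to watch that by hand (all names hypothetical) is:

kubectl create configmap demo-cm --from-literal=key=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-cm-reader             # hypothetical name
spec:
  containers:
  - name: reader
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: cfg
      mountPath: /etc/demo
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF
# Change the data; the mounted file catches up within the kubelet sync period
kubectl create configmap demo-cm --from-literal=key=value-2 --dry-run -o yaml | kubectl apply -f -
kubectl exec demo-cm-reader -- cat /etc/demo/key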
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:21:16.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-d7489d4f-cc15-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 12:21:16.714: INFO: Waiting up to 5m0s for pod "pod-configmaps-d74b117c-cc15-11ea-aa05-0242ac11000b" in namespace "e2e-tests-configmap-k4ddc" to be "success or failure"
Jul 22 12:21:16.722: INFO: Pod "pod-configmaps-d74b117c-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.637831ms
Jul 22 12:21:18.726: INFO: Pod "pod-configmaps-d74b117c-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012553498s
Jul 22 12:21:20.730: INFO: Pod "pod-configmaps-d74b117c-cc15-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016544237s
STEP: Saw pod success
Jul 22 12:21:20.730: INFO: Pod "pod-configmaps-d74b117c-cc15-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:21:20.733: INFO: Trying to get logs from node hunter-worker pod pod-configmaps-d74b117c-cc15-11ea-aa05-0242ac11000b container configmap-volume-test: 
STEP: delete the pod
Jul 22 12:21:20.845: INFO: Waiting for pod pod-configmaps-d74b117c-cc15-11ea-aa05-0242ac11000b to disappear
Jul 22 12:21:20.890: INFO: Pod pod-configmaps-d74b117c-cc15-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:21:20.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-k4ddc" for this suite.
Jul 22 12:21:26.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:21:26.994: INFO: namespace: e2e-tests-configmap-k4ddc, resource: bindings, ignored listing per whitelist
Jul 22 12:21:27.000: INFO: namespace e2e-tests-configmap-k4ddc deletion completed in 6.106342777s

• [SLOW TEST:10.425 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:21:27.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward API volume plugin
Jul 22 12:21:27.175: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd8bec26-cc15-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-xljx4" to be "success or failure"
Jul 22 12:21:27.191: INFO: Pod "downwardapi-volume-dd8bec26-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.109438ms
Jul 22 12:21:29.209: INFO: Pod "downwardapi-volume-dd8bec26-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033589864s
Jul 22 12:21:31.245: INFO: Pod "downwardapi-volume-dd8bec26-cc15-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07004759s
STEP: Saw pod success
Jul 22 12:21:31.245: INFO: Pod "downwardapi-volume-dd8bec26-cc15-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:21:31.248: INFO: Trying to get logs from node hunter-worker2 pod downwardapi-volume-dd8bec26-cc15-11ea-aa05-0242ac11000b container client-container: 
STEP: delete the pod
Jul 22 12:21:31.321: INFO: Waiting for pod downwardapi-volume-dd8bec26-cc15-11ea-aa05-0242ac11000b to disappear
Jul 22 12:21:31.329: INFO: Pod downwardapi-volume-dd8bec26-cc15-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:21:31.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-xljx4" for this suite.
Jul 22 12:21:37.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:21:37.399: INFO: namespace: e2e-tests-projected-xljx4, resource: bindings, ignored listing per whitelist
Jul 22 12:21:37.441: INFO: namespace e2e-tests-projected-xljx4 deletion completed in 6.10876666s

• [SLOW TEST:10.441 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
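
Note: the cpu request reaches the container through a downwardAPI item with a resourceFieldRef; a minimal hypothetical pod showing the wiring (the suite's variant wraps the same item in a projected volume):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ['sh', '-c', 'cat /etc/podinfo/cpu_request']
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
EOF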
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:21:37.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul 22 12:21:37.587: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qlcdd,SelfLink:/api/v1/namespaces/e2e-tests-watch-qlcdd/configmaps/e2e-watch-test-label-changed,UID:e3c0cf6a-cc15-11ea-b2c9-0242ac120008,ResourceVersion:2189319,Generation:0,CreationTimestamp:2020-07-22 12:21:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul 22 12:21:37.587: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qlcdd,SelfLink:/api/v1/namespaces/e2e-tests-watch-qlcdd/configmaps/e2e-watch-test-label-changed,UID:e3c0cf6a-cc15-11ea-b2c9-0242ac120008,ResourceVersion:2189320,Generation:0,CreationTimestamp:2020-07-22 12:21:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul 22 12:21:37.587: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qlcdd,SelfLink:/api/v1/namespaces/e2e-tests-watch-qlcdd/configmaps/e2e-watch-test-label-changed,UID:e3c0cf6a-cc15-11ea-b2c9-0242ac120008,ResourceVersion:2189321,Generation:0,CreationTimestamp:2020-07-22 12:21:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul 22 12:21:47.617: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qlcdd,SelfLink:/api/v1/namespaces/e2e-tests-watch-qlcdd/configmaps/e2e-watch-test-label-changed,UID:e3c0cf6a-cc15-11ea-b2c9-0242ac120008,ResourceVersion:2189342,Generation:0,CreationTimestamp:2020-07-22 12:21:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul 22 12:21:47.617: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qlcdd,SelfLink:/api/v1/namespaces/e2e-tests-watch-qlcdd/configmaps/e2e-watch-test-label-changed,UID:e3c0cf6a-cc15-11ea-b2c9-0242ac120008,ResourceVersion:2189343,Generation:0,CreationTimestamp:2020-07-22 12:21:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jul 22 12:21:47.618: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-qlcdd,SelfLink:/api/v1/namespaces/e2e-tests-watch-qlcdd/configmaps/e2e-watch-test-label-changed,UID:e3c0cf6a-cc15-11ea-b2c9-0242ac120008,ResourceVersion:2189344,Generation:0,CreationTimestamp:2020-07-22 12:21:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:21:47.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-watch-qlcdd" for this suite.
Jul 22 12:21:53.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:21:53.794: INFO: namespace: e2e-tests-watch-qlcdd, resource: bindings, ignored listing per whitelist
Jul 22 12:21:53.801: INFO: namespace e2e-tests-watch-qlcdd deletion completed in 6.153476827s

• [SLOW TEST:16.359 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
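
Note: the ADDED/MODIFIED/DELETED notifications above come from a label-selected watch; the API server turns "label no longer matches the selector" into a DELETED event and "label restored" into an ADDED event. The same stream can be observed with kubectl while the test namespace exists (label value copied from the run):

kubectl -n e2e-tests-watch-qlcdd get configmaps \
  -l watch-this-configmap=label-changed-and-restored --watch -o name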
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:21:53.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 22 12:21:53.901: INFO: Waiting up to 5m0s for pod "pod-ed7b0682-cc15-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-ngq6z" to be "success or failure"
Jul 22 12:21:53.906: INFO: Pod "pod-ed7b0682-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.673512ms
Jul 22 12:21:55.910: INFO: Pod "pod-ed7b0682-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009584575s
Jul 22 12:21:57.914: INFO: Pod "pod-ed7b0682-cc15-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013547839s
STEP: Saw pod success
Jul 22 12:21:57.914: INFO: Pod "pod-ed7b0682-cc15-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:21:57.917: INFO: Trying to get logs from node hunter-worker pod pod-ed7b0682-cc15-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 12:21:57.942: INFO: Waiting for pod pod-ed7b0682-cc15-11ea-aa05-0242ac11000b to disappear
Jul 22 12:21:57.946: INFO: Pod pod-ed7b0682-cc15-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:21:57.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-ngq6z" for this suite.
Jul 22 12:22:03.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:22:03.974: INFO: namespace: e2e-tests-emptydir-ngq6z, resource: bindings, ignored listing per whitelist
Jul 22 12:22:04.031: INFO: namespace e2e-tests-emptydir-ngq6z deletion completed in 6.080573935s

• [SLOW TEST:10.230 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:22:04.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-map-f394285e-cc15-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 12:22:04.153: INFO: Waiting up to 5m0s for pod "pod-configmaps-f395d430-cc15-11ea-aa05-0242ac11000b" in namespace "e2e-tests-configmap-vrldf" to be "success or failure"
Jul 22 12:22:04.156: INFO: Pod "pod-configmaps-f395d430-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.991109ms
Jul 22 12:22:06.160: INFO: Pod "pod-configmaps-f395d430-cc15-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006654089s
Jul 22 12:22:08.164: INFO: Pod "pod-configmaps-f395d430-cc15-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010415285s
STEP: Saw pod success
Jul 22 12:22:08.164: INFO: Pod "pod-configmaps-f395d430-cc15-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:22:08.167: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-f395d430-cc15-11ea-aa05-0242ac11000b container configmap-volume-test: 
STEP: delete the pod
Jul 22 12:22:08.186: INFO: Waiting for pod pod-configmaps-f395d430-cc15-11ea-aa05-0242ac11000b to disappear
Jul 22 12:22:08.202: INFO: Pod pod-configmaps-f395d430-cc15-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:22:08.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-vrldf" for this suite.
Jul 22 12:22:14.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:22:14.330: INFO: namespace: e2e-tests-configmap-vrldf, resource: bindings, ignored listing per whitelist
Jul 22 12:22:14.336: INFO: namespace e2e-tests-configmap-vrldf deletion completed in 6.130935859s

• [SLOW TEST:10.305 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:22:14.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: getting the auto-created API token
Jul 22 12:22:15.023: INFO: created pod pod-service-account-defaultsa
Jul 22 12:22:15.023: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jul 22 12:22:15.041: INFO: created pod pod-service-account-mountsa
Jul 22 12:22:15.041: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jul 22 12:22:15.089: INFO: created pod pod-service-account-nomountsa
Jul 22 12:22:15.089: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jul 22 12:22:15.095: INFO: created pod pod-service-account-defaultsa-mountspec
Jul 22 12:22:15.095: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jul 22 12:22:15.169: INFO: created pod pod-service-account-mountsa-mountspec
Jul 22 12:22:15.169: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jul 22 12:22:15.186: INFO: created pod pod-service-account-nomountsa-mountspec
Jul 22 12:22:15.186: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jul 22 12:22:15.223: INFO: created pod pod-service-account-defaultsa-nomountspec
Jul 22 12:22:15.223: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jul 22 12:22:15.258: INFO: created pod pod-service-account-mountsa-nomountspec
Jul 22 12:22:15.258: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jul 22 12:22:15.318: INFO: created pod pod-service-account-nomountsa-nomountspec
Jul 22 12:22:15.318: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:22:15.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-svcaccounts-dvs5w" for this suite.
Jul 22 12:22:45.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:22:45.531: INFO: namespace: e2e-tests-svcaccounts-dvs5w, resource: bindings, ignored listing per whitelist
Jul 22 12:22:45.538: INFO: namespace e2e-tests-svcaccounts-dvs5w deletion completed in 30.198171324s

• [SLOW TEST:31.201 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
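The pod names above encode the combinations being checked: a service-account-level automount setting, a pod-level automountServiceAccountToken setting, and the defaults, with the pod-level field winning when both are set (note pod-service-account-nomountsa-mountspec reports mount: true). A minimal sketch of those two knobs, with illustrative object names:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomountsa
automountServiceAccountToken: false      # opt out at the service-account level
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa-mountspec
spec:
  serviceAccountName: nomountsa
  automountServiceAccountToken: true     # pod-level setting overrides the service account's opt-out
  containers:
  - name: token-test
    image: docker.io/library/nginx:1.14-alpine
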
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:22:45.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul 22 12:22:45.677: INFO: Waiting up to 5m0s for pod "pod-0c565da9-cc16-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-9wcm8" to be "success or failure"
Jul 22 12:22:45.684: INFO: Pod "pod-0c565da9-cc16-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.704515ms
Jul 22 12:22:47.688: INFO: Pod "pod-0c565da9-cc16-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01096561s
Jul 22 12:22:49.697: INFO: Pod "pod-0c565da9-cc16-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019458695s
STEP: Saw pod success
Jul 22 12:22:49.697: INFO: Pod "pod-0c565da9-cc16-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:22:49.700: INFO: Trying to get logs from node hunter-worker2 pod pod-0c565da9-cc16-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 12:22:49.715: INFO: Waiting for pod pod-0c565da9-cc16-11ea-aa05-0242ac11000b to disappear
Jul 22 12:22:49.720: INFO: Pod pod-0c565da9-cc16-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:22:49.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-9wcm8" for this suite.
Jul 22 12:22:55.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:22:55.783: INFO: namespace: e2e-tests-emptydir-9wcm8, resource: bindings, ignored listing per whitelist
Jul 22 12:22:55.839: INFO: namespace e2e-tests-emptydir-9wcm8 deletion completed in 6.114840315s

• [SLOW TEST:10.301 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
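A rough sketch of the kind of pod an "emptydir 0666 on node default medium" check creates: an emptyDir volume on the default (disk-backed) medium and a container that writes a file with the expected mode. Image and command are illustrative stand-ins, not the test's actual mounttest invocation.

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # illustrative
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                             # default medium (node storage); no medium: Memory
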
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:22:55.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name projected-configmap-test-volume-12756cb6-cc16-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 12:22:55.962: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1279ac0c-cc16-11ea-aa05-0242ac11000b" in namespace "e2e-tests-projected-jdl7n" to be "success or failure"
Jul 22 12:22:55.976: INFO: Pod "pod-projected-configmaps-1279ac0c-cc16-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.877881ms
Jul 22 12:22:57.981: INFO: Pod "pod-projected-configmaps-1279ac0c-cc16-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018285819s
Jul 22 12:22:59.985: INFO: Pod "pod-projected-configmaps-1279ac0c-cc16-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.022596548s
Jul 22 12:23:01.989: INFO: Pod "pod-projected-configmaps-1279ac0c-cc16-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026967777s
STEP: Saw pod success
Jul 22 12:23:01.990: INFO: Pod "pod-projected-configmaps-1279ac0c-cc16-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:23:01.993: INFO: Trying to get logs from node hunter-worker pod pod-projected-configmaps-1279ac0c-cc16-11ea-aa05-0242ac11000b container projected-configmap-volume-test: 
STEP: delete the pod
Jul 22 12:23:02.025: INFO: Waiting for pod pod-projected-configmaps-1279ac0c-cc16-11ea-aa05-0242ac11000b to disappear
Jul 22 12:23:02.030: INFO: Pod pod-projected-configmaps-1279ac0c-cc16-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:23:02.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-projected-jdl7n" for this suite.
Jul 22 12:23:08.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:23:08.054: INFO: namespace: e2e-tests-projected-jdl7n, resource: bindings, ignored listing per whitelist
Jul 22 12:23:08.122: INFO: namespace e2e-tests-projected-jdl7n deletion completed in 6.088384725s

• [SLOW TEST:12.282 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
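The projected variant differs from the plain ConfigMap volume test in that the ConfigMap is consumed through a projected volume source, and "as non-root" means the pod runs with a non-zero UID. A hedged sketch, with illustrative names, image, and UID:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                          # the "as non-root" part of the check
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29    # illustrative
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # generated name in the real run, as logged above
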
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:23:08.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:23:37.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-runtime-q2gsn" for this suite.
Jul 22 12:23:43.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:23:43.722: INFO: namespace: e2e-tests-container-runtime-q2gsn, resource: bindings, ignored listing per whitelist
Jul 22 12:23:43.729: INFO: namespace e2e-tests-container-runtime-q2gsn deletion completed in 6.085763684s

• [SLOW TEST:35.606 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
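The three container names above ('terminate-cmd-rpa', '-rpof', '-rpn') appear to correspond to restartPolicy Always, OnFailure and Never for a container that exits, after which the kubelet-reported RestartCount, Phase, Ready condition and State are compared. A sketch of the OnFailure case under that assumption; image and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-example
spec:
  restartPolicy: OnFailure                   # 'rpof' case; 'rpa' would be Always, 'rpn' Never
  containers:
  - name: terminate-cmd
    image: docker.io/library/busybox:1.29    # illustrative
    command: ["sh", "-c", "exit 1"]          # container exits; status fields are then inspected
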
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:23:43.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
Jul 22 12:23:47.878: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-2f018a20-cc16-11ea-aa05-0242ac11000b", GenerateName:"", Namespace:"e2e-tests-pods-dmr6n", SelfLink:"/api/v1/namespaces/e2e-tests-pods-dmr6n/pods/pod-submit-remove-2f018a20-cc16-11ea-aa05-0242ac11000b", UID:"2f070b0a-cc16-11ea-b2c9-0242ac120008", ResourceVersion:"2189856", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731017423, loc:(*time.Location)(0x7950ac0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"819837216"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jg8h9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00204e6c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jg8h9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001a30618), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"hunter-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00235d020), 
ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a30660)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001a30680)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001a30688), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001a3068c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731017423, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731017427, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731017427, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731017423, loc:(*time.Location)(0x7950ac0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.4", PodIP:"10.244.2.186", StartTime:(*v1.Time)(0xc00135a340), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00135a360), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"docker.io/library/nginx:1.14-alpine", ImageID:"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"containerd://8120609b1025db6bae5101755a473f8a3eff42df03b7bee025719fb4492d71ac"}}, QOSClass:"BestEffort"}}
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:23:57.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dmr6n" for this suite.
Jul 22 12:24:03.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:24:03.654: INFO: namespace: e2e-tests-pods-dmr6n, resource: bindings, ignored listing per whitelist
Jul 22 12:24:03.661: INFO: namespace e2e-tests-pods-dmr6n deletion completed in 6.093862874s

• [SLOW TEST:19.933 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:24:03.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating configMap with name configmap-test-volume-3ae8837e-cc16-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume configMaps
Jul 22 12:24:03.807: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ae90dec-cc16-11ea-aa05-0242ac11000b" in namespace "e2e-tests-configmap-wds65" to be "success or failure"
Jul 22 12:24:03.811: INFO: Pod "pod-configmaps-3ae90dec-cc16-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.714661ms
Jul 22 12:24:05.864: INFO: Pod "pod-configmaps-3ae90dec-cc16-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056440906s
Jul 22 12:24:07.867: INFO: Pod "pod-configmaps-3ae90dec-cc16-11ea-aa05-0242ac11000b": Phase="Running", Reason="", readiness=true. Elapsed: 4.059506594s
Jul 22 12:24:09.872: INFO: Pod "pod-configmaps-3ae90dec-cc16-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063969521s
STEP: Saw pod success
Jul 22 12:24:09.872: INFO: Pod "pod-configmaps-3ae90dec-cc16-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:24:09.875: INFO: Trying to get logs from node hunter-worker2 pod pod-configmaps-3ae90dec-cc16-11ea-aa05-0242ac11000b container configmap-volume-test: 
STEP: delete the pod
Jul 22 12:24:09.903: INFO: Waiting for pod pod-configmaps-3ae90dec-cc16-11ea-aa05-0242ac11000b to disappear
Jul 22 12:24:09.908: INFO: Pod pod-configmaps-3ae90dec-cc16-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:24:09.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-configmap-wds65" for this suite.
Jul 22 12:24:15.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:24:16.001: INFO: namespace: e2e-tests-configmap-wds65, resource: bindings, ignored listing per whitelist
Jul 22 12:24:16.010: INFO: namespace e2e-tests-configmap-wds65 deletion completed in 6.098040425s

• [SLOW TEST:12.348 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:24:16.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:24:20.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubelet-test-62qwc" for this suite.
Jul 22 12:25:00.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:25:00.223: INFO: namespace: e2e-tests-kubelet-test-62qwc, resource: bindings, ignored listing per whitelist
Jul 22 12:25:00.274: INFO: namespace e2e-tests-kubelet-test-62qwc deletion completed in 40.091290041s

• [SLOW TEST:44.264 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
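The hostAliases check verifies that entries declared on the pod spec end up in the container's /etc/hosts. A minimal sketch; the IP, hostnames, image, and command here are illustrative, not the values used by the test:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases-example
spec:
  restartPolicy: Never
  hostAliases:                               # entries the kubelet writes into the pod's /etc/hosts
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29    # illustrative
    command: ["cat", "/etc/hosts"]
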
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:25:00.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: validating cluster-info
Jul 22 12:25:00.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jul 22 12:25:00.488: INFO: stderr: ""
Jul 22 12:25:00.488: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45709\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:45709/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:25:00.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-j77ps" for this suite.
Jul 22 12:25:06.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:25:06.537: INFO: namespace: e2e-tests-kubectl-j77ps, resource: bindings, ignored listing per whitelist
Jul 22 12:25:06.583: INFO: namespace e2e-tests-kubectl-j77ps deletion completed in 6.091611276s

• [SLOW TEST:6.309 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:25:06.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 12:25:06.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:25:10.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-d42h2" for this suite.
Jul 22 12:25:48.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:25:48.952: INFO: namespace: e2e-tests-pods-d42h2, resource: bindings, ignored listing per whitelist
Jul 22 12:25:48.971: INFO: namespace e2e-tests-pods-d42h2 deletion completed in 38.084663011s

• [SLOW TEST:42.387 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:25:48.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul 22 12:25:57.130: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:25:57.178: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:25:59.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:25:59.182: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:01.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:01.183: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:03.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:03.182: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:05.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:05.182: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:07.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:07.182: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:09.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:09.182: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:11.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:11.182: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:13.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:13.182: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:15.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:15.182: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:17.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:17.183: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:19.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:19.182: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:21.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:21.182: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:23.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:23.183: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:25.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:25.183: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:27.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:27.183: INFO: Pod pod-with-prestop-exec-hook still exists
Jul 22 12:26:29.178: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul 22 12:26:29.182: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:26:29.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-4pgxm" for this suite.
Jul 22 12:26:53.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:26:53.246: INFO: namespace: e2e-tests-container-lifecycle-hook-4pgxm, resource: bindings, ignored listing per whitelist
Jul 22 12:26:53.288: INFO: namespace e2e-tests-container-lifecycle-hook-4pgxm deletion completed in 24.094926675s

• [SLOW TEST:64.317 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
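The pod-with-prestop-exec-hook pod above carries a preStop exec hook; when the pod is deleted, the hook runs inside the container before termination, and the test's separate handler pod confirms it fired. A sketch of the shape of such a pod; the image and hook command are illustrative placeholders (the real hook reports to the handler pod created in the BeforeEach step):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook           # name taken from the log above
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: docker.io/library/nginx:1.14-alpine   # illustrative image
    lifecycle:
      preStop:
        exec:                                # runs in the container before it is stopped
          command: ["sh", "-c", "wget -qO- http://hook-handler:8080/echo?msg=prestop || true"]
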
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:26:53.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul 22 12:26:53.440: INFO: Waiting up to 5m0s for pod "pod-a0034405-cc16-11ea-aa05-0242ac11000b" in namespace "e2e-tests-emptydir-sl4lj" to be "success or failure"
Jul 22 12:26:53.474: INFO: Pod "pod-a0034405-cc16-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 33.861635ms
Jul 22 12:26:55.550: INFO: Pod "pod-a0034405-cc16-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109176785s
Jul 22 12:26:57.553: INFO: Pod "pod-a0034405-cc16-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.112721192s
STEP: Saw pod success
Jul 22 12:26:57.553: INFO: Pod "pod-a0034405-cc16-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:26:57.556: INFO: Trying to get logs from node hunter-worker pod pod-a0034405-cc16-11ea-aa05-0242ac11000b container test-container: 
STEP: delete the pod
Jul 22 12:26:57.623: INFO: Waiting for pod pod-a0034405-cc16-11ea-aa05-0242ac11000b to disappear
Jul 22 12:26:57.648: INFO: Pod pod-a0034405-cc16-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:26:57.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-emptydir-sl4lj" for this suite.
Jul 22 12:27:03.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:27:03.686: INFO: namespace: e2e-tests-emptydir-sl4lj, resource: bindings, ignored listing per whitelist
Jul 22 12:27:03.744: INFO: namespace e2e-tests-emptydir-sl4lj deletion completed in 6.093272287s

• [SLOW TEST:10.456 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,default) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:27:03.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating secret with name secret-test-map-a63d331b-cc16-11ea-aa05-0242ac11000b
STEP: Creating a pod to test consume secrets
Jul 22 12:27:03.917: INFO: Waiting up to 5m0s for pod "pod-secrets-a63dd38a-cc16-11ea-aa05-0242ac11000b" in namespace "e2e-tests-secrets-2t2wn" to be "success or failure"
Jul 22 12:27:03.923: INFO: Pod "pod-secrets-a63dd38a-cc16-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.514679ms
Jul 22 12:27:05.996: INFO: Pod "pod-secrets-a63dd38a-cc16-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07932892s
Jul 22 12:27:08.001: INFO: Pod "pod-secrets-a63dd38a-cc16-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083778227s
STEP: Saw pod success
Jul 22 12:27:08.001: INFO: Pod "pod-secrets-a63dd38a-cc16-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:27:08.004: INFO: Trying to get logs from node hunter-worker2 pod pod-secrets-a63dd38a-cc16-11ea-aa05-0242ac11000b container secret-volume-test: 
STEP: delete the pod
Jul 22 12:27:08.039: INFO: Waiting for pod pod-secrets-a63dd38a-cc16-11ea-aa05-0242ac11000b to disappear
Jul 22 12:27:08.075: INFO: Pod pod-secrets-a63dd38a-cc16-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:27:08.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-secrets-2t2wn" for this suite.
Jul 22 12:27:14.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:27:14.200: INFO: namespace: e2e-tests-secrets-2t2wn, resource: bindings, ignored listing per whitelist
Jul 22 12:27:14.225: INFO: namespace e2e-tests-secrets-2t2wn deletion completed in 6.141883588s

• [SLOW TEST:10.481 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
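This is the Secret counterpart of the ConfigMap volume-with-mappings check: a Secret mounted as a volume with a key-to-path item mapping. Names, image, and the secret value are illustrative assumptions:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map                      # generated name in the real run, as logged above
data:
  data-1: dmFsdWUtMQ==                       # base64 of "value-1", illustrative
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29    # illustrative
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:                                 # key-to-path mapping, mirroring the ConfigMap variant
      - key: data-1
        path: new-path-data-1
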
SSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:27:14.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
Jul 22 12:27:14.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:27:18.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-524ws" for this suite.
Jul 22 12:28:02.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:28:02.497: INFO: namespace: e2e-tests-pods-524ws, resource: bindings, ignored listing per whitelist
Jul 22 12:28:02.523: INFO: namespace e2e-tests-pods-524ws deletion completed in 44.144141409s

• [SLOW TEST:48.297 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:28:02.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-p2g5n
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 22 12:28:02.609: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul 22 12:28:28.806: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.1.143 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-p2g5n PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 12:28:28.806: INFO: >>> kubeConfig: /root/.kube/config
I0722 12:28:28.835878       7 log.go:172] (0xc000fd02c0) (0xc0011a57c0) Create stream
I0722 12:28:28.835917       7 log.go:172] (0xc000fd02c0) (0xc0011a57c0) Stream added, broadcasting: 1
I0722 12:28:28.845486       7 log.go:172] (0xc000fd02c0) Reply frame received for 1
I0722 12:28:28.845556       7 log.go:172] (0xc000fd02c0) (0xc001712c80) Create stream
I0722 12:28:28.845574       7 log.go:172] (0xc000fd02c0) (0xc001712c80) Stream added, broadcasting: 3
I0722 12:28:28.846701       7 log.go:172] (0xc000fd02c0) Reply frame received for 3
I0722 12:28:28.846747       7 log.go:172] (0xc000fd02c0) (0xc0011a5860) Create stream
I0722 12:28:28.846762       7 log.go:172] (0xc000fd02c0) (0xc0011a5860) Stream added, broadcasting: 5
I0722 12:28:28.847889       7 log.go:172] (0xc000fd02c0) Reply frame received for 5
I0722 12:28:29.908392       7 log.go:172] (0xc000fd02c0) Data frame received for 3
I0722 12:28:29.908445       7 log.go:172] (0xc001712c80) (3) Data frame handling
I0722 12:28:29.908484       7 log.go:172] (0xc001712c80) (3) Data frame sent
I0722 12:28:29.908519       7 log.go:172] (0xc000fd02c0) Data frame received for 3
I0722 12:28:29.908547       7 log.go:172] (0xc001712c80) (3) Data frame handling
I0722 12:28:29.909463       7 log.go:172] (0xc000fd02c0) Data frame received for 5
I0722 12:28:29.909516       7 log.go:172] (0xc0011a5860) (5) Data frame handling
I0722 12:28:29.911239       7 log.go:172] (0xc000fd02c0) Data frame received for 1
I0722 12:28:29.911297       7 log.go:172] (0xc0011a57c0) (1) Data frame handling
I0722 12:28:29.911332       7 log.go:172] (0xc0011a57c0) (1) Data frame sent
I0722 12:28:29.911354       7 log.go:172] (0xc000fd02c0) (0xc0011a57c0) Stream removed, broadcasting: 1
I0722 12:28:29.911374       7 log.go:172] (0xc000fd02c0) Go away received
I0722 12:28:29.911528       7 log.go:172] (0xc000fd02c0) (0xc0011a57c0) Stream removed, broadcasting: 1
I0722 12:28:29.911561       7 log.go:172] (0xc000fd02c0) (0xc001712c80) Stream removed, broadcasting: 3
I0722 12:28:29.911588       7 log.go:172] (0xc000fd02c0) (0xc0011a5860) Stream removed, broadcasting: 5
Jul 22 12:28:29.911: INFO: Found all expected endpoints: [netserver-0]
Jul 22 12:28:29.915: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.244.2.191 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-p2g5n PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 22 12:28:29.915: INFO: >>> kubeConfig: /root/.kube/config
I0722 12:28:29.947306       7 log.go:172] (0xc002690580) (0xc001712e60) Create stream
I0722 12:28:29.947339       7 log.go:172] (0xc002690580) (0xc001712e60) Stream added, broadcasting: 1
I0722 12:28:29.949372       7 log.go:172] (0xc002690580) Reply frame received for 1
I0722 12:28:29.949433       7 log.go:172] (0xc002690580) (0xc0028d52c0) Create stream
I0722 12:28:29.949460       7 log.go:172] (0xc002690580) (0xc0028d52c0) Stream added, broadcasting: 3
I0722 12:28:29.950485       7 log.go:172] (0xc002690580) Reply frame received for 3
I0722 12:28:29.950530       7 log.go:172] (0xc002690580) (0xc002a86780) Create stream
I0722 12:28:29.950547       7 log.go:172] (0xc002690580) (0xc002a86780) Stream added, broadcasting: 5
I0722 12:28:29.951527       7 log.go:172] (0xc002690580) Reply frame received for 5
I0722 12:28:31.017994       7 log.go:172] (0xc002690580) Data frame received for 3
I0722 12:28:31.018045       7 log.go:172] (0xc0028d52c0) (3) Data frame handling
I0722 12:28:31.018079       7 log.go:172] (0xc0028d52c0) (3) Data frame sent
I0722 12:28:31.018123       7 log.go:172] (0xc002690580) Data frame received for 3
I0722 12:28:31.018162       7 log.go:172] (0xc0028d52c0) (3) Data frame handling
I0722 12:28:31.018359       7 log.go:172] (0xc002690580) Data frame received for 5
I0722 12:28:31.018396       7 log.go:172] (0xc002a86780) (5) Data frame handling
I0722 12:28:31.020589       7 log.go:172] (0xc002690580) Data frame received for 1
I0722 12:28:31.020668       7 log.go:172] (0xc001712e60) (1) Data frame handling
I0722 12:28:31.020818       7 log.go:172] (0xc001712e60) (1) Data frame sent
I0722 12:28:31.020849       7 log.go:172] (0xc002690580) (0xc001712e60) Stream removed, broadcasting: 1
I0722 12:28:31.020880       7 log.go:172] (0xc002690580) Go away received
I0722 12:28:31.021171       7 log.go:172] (0xc002690580) (0xc001712e60) Stream removed, broadcasting: 1
I0722 12:28:31.021198       7 log.go:172] (0xc002690580) (0xc0028d52c0) Stream removed, broadcasting: 3
I0722 12:28:31.021213       7 log.go:172] (0xc002690580) (0xc002a86780) Stream removed, broadcasting: 5
Jul 22 12:28:31.021: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:28:31.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-pod-network-test-p2g5n" for this suite.
Jul 22 12:28:55.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:28:55.099: INFO: namespace: e2e-tests-pod-network-test-p2g5n, resource: bindings, ignored listing per whitelist
Jul 22 12:28:55.117: INFO: namespace e2e-tests-pod-network-test-p2g5n deletion completed in 24.090359896s

• [SLOW TEST:52.594 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:28:55.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 22 12:28:55.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-7jm8b'
Jul 22 12:28:55.418: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul 22 12:28:55.418: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jul 22 12:28:55.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-7jm8b'
Jul 22 12:28:55.535: INFO: stderr: ""
Jul 22 12:28:55.535: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:28:55.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-7jm8b" for this suite.
Jul 22 12:29:17.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:29:17.645: INFO: namespace: e2e-tests-kubectl-7jm8b, resource: bindings, ignored listing per whitelist
Jul 22 12:29:17.653: INFO: namespace e2e-tests-kubectl-7jm8b deletion completed in 22.115251207s

• [SLOW TEST:22.537 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
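The deprecated 'kubectl run --generator=job/v1 --restart=OnFailure' invocation above creates a batch/v1 Job (the log confirms "job.batch/e2e-test-nginx-job created"). Roughly the object that produces, as a sketch; labels and container name are assumptions about kubectl's generated defaults:

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job                   # name from the kubectl run command above
spec:
  template:
    metadata:
      labels:
        run: e2e-test-nginx-job
    spec:
      restartPolicy: OnFailure               # from --restart=OnFailure
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
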
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:29:17.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: creating all guestbook components
Jul 22 12:29:17.744: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jul 22 12:29:17.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wtcjv'
Jul 22 12:29:18.057: INFO: stderr: ""
Jul 22 12:29:18.057: INFO: stdout: "service/redis-slave created\n"
Jul 22 12:29:18.058: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jul 22 12:29:18.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wtcjv'
Jul 22 12:29:18.386: INFO: stderr: ""
Jul 22 12:29:18.386: INFO: stdout: "service/redis-master created\n"
Jul 22 12:29:18.386: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jul 22 12:29:18.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wtcjv'
Jul 22 12:29:18.663: INFO: stderr: ""
Jul 22 12:29:18.663: INFO: stdout: "service/frontend created\n"
Jul 22 12:29:18.664: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jul 22 12:29:18.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wtcjv'
Jul 22 12:29:18.940: INFO: stderr: ""
Jul 22 12:29:18.940: INFO: stdout: "deployment.extensions/frontend created\n"
Jul 22 12:29:18.940: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul 22 12:29:18.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wtcjv'
Jul 22 12:29:19.249: INFO: stderr: ""
Jul 22 12:29:19.249: INFO: stdout: "deployment.extensions/redis-master created\n"
Jul 22 12:29:19.249: INFO: apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jul 22 12:29:19.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-wtcjv'
Jul 22 12:29:19.544: INFO: stderr: ""
Jul 22 12:29:19.544: INFO: stdout: "deployment.extensions/redis-slave created\n"
STEP: validating guestbook app
Jul 22 12:29:19.544: INFO: Waiting for all frontend pods to be Running.
Jul 22 12:29:29.594: INFO: Waiting for frontend to serve content.
Jul 22 12:29:29.611: INFO: Trying to add a new entry to the guestbook.
Jul 22 12:29:29.625: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jul 22 12:29:29.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wtcjv'
Jul 22 12:29:32.082: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 22 12:29:32.082: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul 22 12:29:32.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wtcjv'
Jul 22 12:29:32.272: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 22 12:29:32.272: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 22 12:29:32.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wtcjv'
Jul 22 12:29:32.441: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 22 12:29:32.441: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 22 12:29:32.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wtcjv'
Jul 22 12:29:32.591: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 22 12:29:32.591: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 22 12:29:32.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wtcjv'
Jul 22 12:29:32.728: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 22 12:29:32.728: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 22 12:29:32.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wtcjv'
Jul 22 12:29:32.976: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 22 12:29:32.976: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:29:32.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-wtcjv" for this suite.
Jul 22 12:30:13.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:30:13.407: INFO: namespace: e2e-tests-kubectl-wtcjv, resource: bindings, ignored listing per whitelist
Jul 22 12:30:13.430: INFO: namespace e2e-tests-kubectl-wtcjv deletion completed in 40.152512417s

• [SLOW TEST:55.777 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
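The guestbook manifests logged above target the deprecated extensions/v1beta1 Deployment API, which this v1.13 cluster still serves. On current clusters the same objects would be written against apps/v1, which additionally requires an explicit selector matching the pod template labels. A minimal sketch of the frontend Deployment in that form, keeping the image and labels from the log and treating everything else as boilerplate:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        ports:
        - containerPort: 80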
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:30:13.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating a pod to test downward api env vars
Jul 22 12:30:13.559: INFO: Waiting up to 5m0s for pod "downward-api-174e4f7b-cc17-11ea-aa05-0242ac11000b" in namespace "e2e-tests-downward-api-ghfxb" to be "success or failure"
Jul 22 12:30:13.578: INFO: Pod "downward-api-174e4f7b-cc17-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.975486ms
Jul 22 12:30:15.602: INFO: Pod "downward-api-174e4f7b-cc17-11ea-aa05-0242ac11000b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04268531s
Jul 22 12:30:17.606: INFO: Pod "downward-api-174e4f7b-cc17-11ea-aa05-0242ac11000b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047270603s
STEP: Saw pod success
Jul 22 12:30:17.606: INFO: Pod "downward-api-174e4f7b-cc17-11ea-aa05-0242ac11000b" satisfied condition "success or failure"
Jul 22 12:30:17.610: INFO: Trying to get logs from node hunter-worker pod downward-api-174e4f7b-cc17-11ea-aa05-0242ac11000b container dapi-container: 
STEP: delete the pod
Jul 22 12:30:17.703: INFO: Waiting for pod downward-api-174e4f7b-cc17-11ea-aa05-0242ac11000b to disappear
Jul 22 12:30:17.714: INFO: Pod downward-api-174e4f7b-cc17-11ea-aa05-0242ac11000b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:30:17.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-downward-api-ghfxb" for this suite.
Jul 22 12:30:23.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:30:23.797: INFO: namespace: e2e-tests-downward-api-ghfxb, resource: bindings, ignored listing per whitelist
Jul 22 12:30:23.823: INFO: namespace e2e-tests-downward-api-ghfxb deletion completed in 6.106546522s

• [SLOW TEST:10.393 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
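The Downward API spec above injects the pod's own UID into the container environment via a fieldRef on metadata.uid. A minimal sketch of a pod doing the same thing; the pod name, image, command, and env var name are illustrative, while the container name dapi-container and the metadata.uid field path follow the behaviour under test:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid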
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:30:23.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Creating pod pod-subpath-test-projected-tbhs
STEP: Creating a pod to test atomic-volume-subpath
Jul 22 12:30:23.950: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-tbhs" in namespace "e2e-tests-subpath-wfxrf" to be "success or failure"
Jul 22 12:30:23.984: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 34.205501ms
Jul 22 12:30:26.067: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117656293s
Jul 22 12:30:28.072: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.121818172s
Jul 22 12:30:30.266: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Running", Reason="", readiness=true. Elapsed: 6.315994746s
Jul 22 12:30:32.270: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Running", Reason="", readiness=false. Elapsed: 8.320681251s
Jul 22 12:30:34.275: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Running", Reason="", readiness=false. Elapsed: 10.325206239s
Jul 22 12:30:36.279: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Running", Reason="", readiness=false. Elapsed: 12.329295905s
Jul 22 12:30:38.283: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Running", Reason="", readiness=false. Elapsed: 14.333506191s
Jul 22 12:30:40.287: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Running", Reason="", readiness=false. Elapsed: 16.337660345s
Jul 22 12:30:42.291: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Running", Reason="", readiness=false. Elapsed: 18.341271625s
Jul 22 12:30:44.295: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Running", Reason="", readiness=false. Elapsed: 20.345559019s
Jul 22 12:30:46.325: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Running", Reason="", readiness=false. Elapsed: 22.375626696s
Jul 22 12:30:48.329: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Running", Reason="", readiness=false. Elapsed: 24.379655815s
Jul 22 12:30:50.409: INFO: Pod "pod-subpath-test-projected-tbhs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.45969776s
STEP: Saw pod success
Jul 22 12:30:50.410: INFO: Pod "pod-subpath-test-projected-tbhs" satisfied condition "success or failure"
Jul 22 12:30:50.412: INFO: Trying to get logs from node hunter-worker pod pod-subpath-test-projected-tbhs container test-container-subpath-projected-tbhs: 
STEP: delete the pod
Jul 22 12:30:50.460: INFO: Waiting for pod pod-subpath-test-projected-tbhs to disappear
Jul 22 12:30:50.470: INFO: Pod pod-subpath-test-projected-tbhs no longer exists
STEP: Deleting pod pod-subpath-test-projected-tbhs
Jul 22 12:30:50.470: INFO: Deleting pod "pod-subpath-test-projected-tbhs" in namespace "e2e-tests-subpath-wfxrf"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:30:50.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-subpath-wfxrf" for this suite.
Jul 22 12:30:56.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:30:56.529: INFO: namespace: e2e-tests-subpath-wfxrf, resource: bindings, ignored listing per whitelist
Jul 22 12:30:56.590: INFO: namespace e2e-tests-subpath-wfxrf deletion completed in 6.113245195s

• [SLOW TEST:32.766 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
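The subpath spec above mounts a single file out of a projected volume with a subPath volumeMount and waits for the test container to read it and exit successfully. A minimal sketch of the same wiring, assuming an existing ConfigMap named my-config with a key data-file (all names and the image are illustrative, not taken from the log):

apiVersion: v1
kind: Pod
metadata:
  name: projected-subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox
    command: ["sh", "-c", "cat /mnt/data-file"]
    volumeMounts:
    - name: projected-vol
      mountPath: /mnt/data-file
      subPath: data-file
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: my-config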
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:30:56.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
STEP: creating an rc
Jul 22 12:30:56.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=e2e-tests-kubectl-hrg5p'
Jul 22 12:30:56.941: INFO: stderr: ""
Jul 22 12:30:56.941: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Waiting for Redis master to start.
Jul 22 12:30:57.946: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 12:30:57.946: INFO: Found 0 / 1
Jul 22 12:30:58.946: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 12:30:58.946: INFO: Found 0 / 1
Jul 22 12:30:59.946: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 12:30:59.946: INFO: Found 0 / 1
Jul 22 12:31:00.946: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 12:31:00.946: INFO: Found 1 / 1
Jul 22 12:31:00.946: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul 22 12:31:00.949: INFO: Selector matched 1 pods for map[app:redis]
Jul 22 12:31:00.949: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for a matching string
Jul 22 12:31:00.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zb5fg redis-master --namespace=e2e-tests-kubectl-hrg5p'
Jul 22 12:31:01.071: INFO: stderr: ""
Jul 22 12:31:01.071: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 22 Jul 12:30:59.750 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jul 12:30:59.750 # Server started, Redis version 3.2.12\n1:M 22 Jul 12:30:59.750 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jul 12:30:59.750 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jul 22 12:31:01.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-zb5fg redis-master --namespace=e2e-tests-kubectl-hrg5p --tail=1'
Jul 22 12:31:01.173: INFO: stderr: ""
Jul 22 12:31:01.173: INFO: stdout: "1:M 22 Jul 12:30:59.750 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jul 22 12:31:01.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-zb5fg redis-master --namespace=e2e-tests-kubectl-hrg5p --limit-bytes=1'
Jul 22 12:31:01.282: INFO: stderr: ""
Jul 22 12:31:01.282: INFO: stdout: " "
STEP: exposing timestamps
Jul 22 12:31:01.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-zb5fg redis-master --namespace=e2e-tests-kubectl-hrg5p --tail=1 --timestamps'
Jul 22 12:31:01.390: INFO: stderr: ""
Jul 22 12:31:01.390: INFO: stdout: "2020-07-22T12:30:59.751124044Z 1:M 22 Jul 12:30:59.750 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jul 22 12:31:03.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-zb5fg redis-master --namespace=e2e-tests-kubectl-hrg5p --since=1s'
Jul 22 12:31:04.012: INFO: stderr: ""
Jul 22 12:31:04.012: INFO: stdout: ""
Jul 22 12:31:04.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config log redis-master-zb5fg redis-master --namespace=e2e-tests-kubectl-hrg5p --since=24h'
Jul 22 12:31:04.263: INFO: stderr: ""
Jul 22 12:31:04.263: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 22 Jul 12:30:59.750 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Jul 12:30:59.750 # Server started, Redis version 3.2.12\n1:M 22 Jul 12:30:59.750 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Jul 12:30:59.750 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140
STEP: using delete to clean up resources
Jul 22 12:31:04.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-hrg5p'
Jul 22 12:31:04.493: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 22 12:31:04.493: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jul 22 12:31:04.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-hrg5p'
Jul 22 12:31:04.622: INFO: stderr: "No resources found.\n"
Jul 22 12:31:04.622: INFO: stdout: ""
Jul 22 12:31:04.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=e2e-tests-kubectl-hrg5p -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 22 12:31:04.721: INFO: stderr: ""
Jul 22 12:31:04.721: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:31:04.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-kubectl-hrg5p" for this suite.
Jul 22 12:31:27.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:31:27.074: INFO: namespace: e2e-tests-kubectl-hrg5p, resource: bindings, ignored listing per whitelist
Jul 22 12:31:27.098: INFO: namespace e2e-tests-kubectl-hrg5p deletion completed in 22.372718606s

• [SLOW TEST:30.508 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
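The filtering steps above invoke the deprecated `kubectl log` alias; `kubectl logs` is the supported spelling and accepts the same flags. A condensed sketch of the filters exercised, against the pod and container named in the log (the --kubeconfig flag from the logged invocations is omitted here for brevity):

kubectl logs redis-master-zb5fg redis-master --namespace=e2e-tests-kubectl-hrg5p --tail=1
kubectl logs redis-master-zb5fg redis-master --namespace=e2e-tests-kubectl-hrg5p --limit-bytes=1
kubectl logs redis-master-zb5fg redis-master --namespace=e2e-tests-kubectl-hrg5p --tail=1 --timestamps
kubectl logs redis-master-zb5fg redis-master --namespace=e2e-tests-kubectl-hrg5p --since=24h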
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:31:27.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
STEP: Creating service test in namespace e2e-tests-statefulset-sdbvp
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-sdbvp
STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-sdbvp
STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-sdbvp
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-sdbvp
Jul 22 12:31:33.274: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-sdbvp, name: ss-0, uid: 4683bad7-cc17-11ea-b2c9-0242ac120008, status phase: Pending. Waiting for statefulset controller to delete.
Jul 22 12:31:33.300: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-sdbvp, name: ss-0, uid: 4683bad7-cc17-11ea-b2c9-0242ac120008, status phase: Failed. Waiting for statefulset controller to delete.
Jul 22 12:31:33.324: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-sdbvp, name: ss-0, uid: 4683bad7-cc17-11ea-b2c9-0242ac120008, status phase: Failed. Waiting for statefulset controller to delete.
Jul 22 12:31:33.334: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-sdbvp
STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-sdbvp
STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-sdbvp and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
Jul 22 12:31:37.418: INFO: Deleting all statefulset in ns e2e-tests-statefulset-sdbvp
Jul 22 12:31:37.421: INFO: Scaling statefulset ss to 0
Jul 22 12:31:57.439: INFO: Waiting for statefulset status.replicas updated to 0
Jul 22 12:31:57.442: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:31:57.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-statefulset-sdbvp" for this suite.
Jul 22 12:32:03.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:32:03.529: INFO: namespace: e2e-tests-statefulset-sdbvp, resource: bindings, ignored listing per whitelist
Jul 22 12:32:03.558: INFO: namespace e2e-tests-statefulset-sdbvp deletion completed in 6.094456344s

• [SLOW TEST:36.460 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
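The eviction scenario above places a plain pod and a one-replica StatefulSet on the same node with a conflicting port (a hostPort clash), then checks that the controller keeps deleting and recreating ss-0 until the conflicting pod is removed. A minimal sketch of the StatefulSet half of that setup; serviceName matches the "test" service created in the log, while the labels, image, and port values are illustrative:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: webserver
        image: docker.io/library/nginx
        ports:
        - containerPort: 80
          hostPort: 8080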
SSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
STEP: Creating a kubernetes client
Jul 22 12:32:03.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
Jul 22 12:32:12.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-replication-controller-8b222" for this suite.
Jul 22 12:32:34.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 22 12:32:34.857: INFO: namespace: e2e-tests-replication-controller-8b222, resource: bindings, ignored listing per whitelist
Jul 22 12:32:34.860: INFO: namespace e2e-tests-replication-controller-8b222 deletion completed in 22.122817369s

• [SLOW TEST:31.303 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
------------------------------
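The adoption check above creates a standalone pod labelled name=pod-adoption first, then a ReplicationController whose selector matches that label, and verifies the controller adopts the existing pod instead of creating a replacement. A minimal sketch of that pairing; the image is illustrative, and the name=pod-adoption label is the one the step text refers to:

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx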
SSSSSSSSSSSSSSSSSSSSSSS
Jul 22 12:32:34.861: INFO: Running AfterSuite actions on all nodes
Jul 22 12:32:34.861: INFO: Running AfterSuite actions on node 1
Jul 22 12:32:34.861: INFO: Skipping dumping logs from cluster

Ran 200 of 2164 Specs in 6328.899 seconds
SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1964 Skipped
PASS